Health-related mis- and disinformation, especially on the Internet, has been evident not only during the spread of the novel coronavirus (SARS-CoV-2) and COVID-19 (e.g., Arif et al. 2018; Pías-Peleteiro et al. 2013; Scullard et al. 2010) but for centuries. However, the enormous abundance of (uncertain) online information worldwide has led to what the WHO calls an ‘infodemic’ (World Health Organization [WHO] 2020), making it difficult for people to find evidence-based information and to distinguish correct from incorrect information. Whether the neologism ‘infodemic’ is used properly here can be doubted (Simon and Camargo 2021), but what remains certain is that health-related misinformation has been shown to have a variety of negative consequences for individuals and society (Lewandowsky et al. 2021).
Developing evidence-based recommendations on how to adequately debunk health-related mis- and disinformation and myths in communication is important not only for individual health but also for society as a whole (Cook et al. 2017; Swire and Ecker 2018). For example, the idea that the MMR vaccine causes autism is a myth whose spread poses a risk to society. It is still widespread on the Internet, although it has been repeatedly exposed as a myth in the media and debunked by strong scientific evidence (Scullard et al. 2010). The U.S. vaccination rate probably decreased significantly as a result of the spread of this myth (Poland and Spier 2010). The economic burden of measles outbreaks in the United States was estimated at several million dollars (Ortega-Sanchez et al. 2014; Smith et al. 2008); the number of measles cases has also increased in Germany in recent years (Robert Koch Institute 2018). As another example, denial of the scientific consensus that HIV causes AIDS led to policies in South Africa between 2000 and 2005 that are estimated to have contributed to 330,000 excess deaths (Chigwedere et al. 2008).
Misunderstandings and inaccurate knowledge can occur because people in everyday life have limited time, cognitive resources, and/or motivation to understand complex scientific topics (Cook et al. 2017; Swire and Ecker 2018). Some people simply believe what they have once heard or read from their parents and friends or online. However, regardless of how the misinformed beliefs were formed, they are relatively stable in recipients’ cognitive/mental models, quite resistant to persuasive corrective messages and thus difficult to eliminate (Cook et al. 2017; Ecker et al. 2011; Lewandowsky et al. 2012; Swire and Ecker 2018). Corrective messages can even cause backfire effects, i.e., unintended effects in which originally incorrect attitudes are further reinforced by receiving a correction message (Cook et al. 2017; Lewandowsky et al. 2012; Nyhan and Reifler 2014). Special debunking strategies are necessary to correct myths once they are established (Cook and Lewandowsky 2011).
Researchers have assembled a collection of recommended best-practice debunking strategies (e.g., Cook and Lewandowsky 2011; Dan 2021; Swire and Ecker 2018). Among other things, they call for debunking texts that visually support the corrective explanations (e.g., through graphics; Cook and Lewandowsky 2011; Dan 2021; Nyhan and Reifler 2019). One reason is the assumption that pictures can increase the perceived credibility of the core statement in a correction (Dan 2021).
The present study investigates the effects of debunking texts created according to the latest research for four different health myths on recipients’ beliefs, (future) behaviour and feelings regarding those myths. The current state of research does not provide insights into how debunking strategies with different images can change attitudes regarding health myths. The second aim of the study is therefore to investigate the effect of different visualisations in the debunking texts. After discussing health myths and their distribution on the Internet, we review the research on the debunking of misinformation and on visual health communication and its effects. This forms the basis for the research questions and hypotheses. The research design and methodology of the online survey experiment and the results are presented next, followed by a discussion.
Health myths and their distribution on the Internet
Health myths can be defined as health-related statements that are widely disseminated and believed by many people, and that either are not supported by scientific evidence or are contradicted by strong scientific evidence. Mostly they are pseudo-scientific explanations that may have intuitive appeal (Shmerling 2019). Traditionally, old health myths like “swimming after eating is dangerous” have been passed on by personally trusted sources, such as parents, grandparents and friends (Donovan and Thompson 2010; Northwell Health 2017), which is why they are credible and persistent. Health myths have also been reinforced and disseminated via the Internet (Cook 2019; Lewandowsky et al. 2012; Scullard et al. 2010). Many people (41%) who did not use the Internet to find health information said they usually ask their friends, relatives, or other people (Eurobarometer 2014). Health myths are mostly plausible, easily understood stories that sound like truth and wisdom. Many myths have arisen from outdated or misinterpreted scientific findings; others are couched in what seems like common sense or logic, such as the claims that reading in the dark harms the eyes (Donovan and Thompson 2010; Vreeman and Carroll 2007) or that alcohol enhances digestion (Heinrich et al. 2010). Therefore, myths are often based on a grain of truth in combination with misinterpretations, wishful thinking, or fears.
The actual spread of health myths on the Internet is still under-researched. The media conditions of the Internet, such as its dynamic nature, multimediality, multimodality, reactivity and content personalisation, make systematic data collection difficult; as a result, the spread of health myths has not yet been analysed at all in the German-speaking world and only marginally in the international context. Primarily, the extent to which websites argue for or against vaccination myths has been examined on a semantic-textual level. Pías-Peleteiro et al. (2013) assessed Spanish websites on the human papillomavirus (HPV) and concluded that 45.5% of all blogs and forums and 25% of all press sites analysed disseminated information discouraging vaccination. Results from Scullard et al. (2010) and Madden et al. (2012) also showed that the amount of false information found depends on the health topic and varies by website type. For example, governmental, academic and non-profit websites can be trusted most when it comes to vaccination information, while sponsored sites are least trustworthy (Madden et al. 2012; Scullard et al. 2010). Since health information on the Internet depends on the topic (Scullard et al. 2010) and the location (Arif et al. 2018), and these sources vary in their quality and accuracy, this also contributes to the dissemination of misinformation.
Debunking of misinformation beliefs
Debunking is about changing the beliefs of recipients. This can be seen as a persuasive effect, because persuasion in its broadest sense describes the process in which one actor attempts to change the beliefs of another social actor through the use of communication (Dillard 2010). Beliefs are defined by Wyer and Albarracín (2005) as “estimates of subjective probability which, in the case of propositions, are reflected in either (a) estimates of the likelihood that the proposition is true, (b) expressions of confidence or certainty that the proposition is valid, or, in some cases, (c) agreement with the proposition” (p 277). They are thus to be regarded as subjective estimates of the probability that certain knowledge is true. A person’s beliefs can also be false, that is, they may not correspond to the truth (Perloff 2017). Wyer and Albarracín (2005) describe beliefs as being based on verbal, emotional and visual information and as able to be influenced cognitively as well as affectively and conatively. It can also be assumed that beliefs consist of a cognitive, an affective and a conative component (Kruglanski and Stroebe 2008). These components can potentially be influenced by communication. If this process is intentional, it can be called persuasion. According to Perloff (2017), persuasion can be defined “as a symbolic process in which communicators try to convince other people to change their own attitudes or behaviours regarding an issue through the transmission of a message in an atmosphere of free choice” (p 22). The early definition by Bettinghaus and Cody (1987) also emphasises the persuasive effects that communication can have on beliefs, emotions and behaviour. Persuasion is “a conscious attempt by one individual to change the attitudes, beliefs, or behaviour of another individual or group of individuals through the transmission of some message” (Bettinghaus and Cody 1987, p 3).
Previous meta-analyses examining the effect of corrections compared to uncorrected control conditions have shown that corrective messages can significantly reduce belief in misinformation (Blank and Launay 2014; Chan et al. 2017; Walter and Murphy 2018). Exposure to a correction is thus better than not receiving a correction at all.
Researchers have assembled a collection of recommended best-practice debunking strategies (Cook and Lewandowsky 2011; Dan 2021; Ecker et al. 2015; Lewandowsky et al. 2012; Swire and Ecker 2018). For example, the debunking text should support credibility judgments (Swire and Ecker 2018). In mass communication, basing claims on evidence (e.g., study results or expert opinions), adequately referencing the evidence and presenting data in a comprehensible way build credibility and thus contribute to a greater efficacy of the corrections (Gigerenzer et al. 2007). Furthermore, to avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), a debunking text should emphasise the intended facts rather than the myth (Cook and Lewandowsky 2011; Lewandowsky et al. 2012). A debunking text should also be simple, easily understandable and brief, using clear language and graphs where appropriate (Cook and Lewandowsky 2011; Lewandowsky et al. 2012). If the myth is simpler and more compelling than the debunking, it will be cognitively more attractive, risking an overkill backfire effect (Cook and Lewandowsky 2011; Lewandowsky et al. 2012). Further, an effective debunking text requires a factual replacement for the causal explanations initially supplied by the refuted misinformation (Cook and Lewandowsky 2011; Ecker et al. 2015). To effectively debunk misinformation, messages should provide a coherent and detailed explanation that enables recipients to update their mental models completely and that even describes why the misinformation was disseminated (Chan et al. 2017; Lewandowsky et al. 2012; Walter and Murphy 2018; Walter and Tukachinsky 2019). Last, graphical information is probably more effective than text in reducing misperceptions (Cook and Lewandowsky 2011; Dan 2021; Nyhan and Reifler 2019).
However, on average, corrections do not entirely eliminate the effect of misinformation; there is a continued influence of misinformation (Walter and Tukachinsky 2019). The phenomenon of maintaining beliefs despite explicitly contradictory evidence is also called “belief perseverance” (for an overview, see Anderson 2008). One reason for this lies in mental models, which appear to be the most consistently supported explanation for the (non)correction of misinformation (Walter and Tukachinsky 2019). It is assumed that health information forms mental models, which provide simple, causal explanations for facts and observations (Johnson and Seifert 1994). If these explanations, pieces of information, or whole health myths are then debunked, an unpleasant gap develops in the mental model. This is one reason why many people tend to believe in easily accessible myths that are incorrect but simple, coherent and complete (Cook and Lewandowsky 2011; Johnson and Seifert 1994; Lewandowsky et al. 2012). Representations of the valid and the invalid information might also coexist side by side in memory and compete for activation (Swire and Ecker 2018). The theory of cognitive dissonance is based on the assumption that people strive for cognitive consistency and desire consonant relationships between their cognitions. Cognitive dissonance is a state of mental imbalance resulting from inconsistent relationships between cognitions (Festinger 1957). Further, it is assumed that individuals not only attempt to reduce dissonance but also actively avoid information from which dissonance is to be expected (Festinger 1957; see also confirmation bias, Pohl and Pohl 2004). In contrast, there are also processes in which people consciously reject corrective statements and thereby stabilise misinformation, a process known as motivated reasoning (Kraft et al. 2015). People are sometimes unwilling to accept new information, especially corrective information that contradicts their views (Cook et al. 2017; Nyhan and Reifler 2010). Although they know the scientific evidence, they refuse to accept it. It can therefore be noted that the rejection of scientific findings and the stability of misinformation are maintained not only by an uninformed population or the ever-increasing spread of misinformation but often by individually motivated characteristics and information processing (Cook et al. 2017).
Different factors can influence the effectiveness of debunking, such as individual predispositions, message factors and source factors. In terms of the nature of misinformation, research shows that correcting real-world misinformation, which actually circulates outside the laboratory, is more challenging than correcting constructed misinformation, which is invented for a study (Walter and Murphy 2018). In terms of individual predispositions, studies show that a greater degree of scepticism can lead to a better refutation of misinformation (DiFonzo et al. 2016; Lewandowsky et al. 2012; Swire and Ecker 2018). The credibility attributed to a message and its source is regarded as a decisive influencing variable (O’Keefe 2002; Pornpitakpan 2004). Using relevant evidence can increase a message’s credibility and persuasive effect. At the same time, corrections are less effective if the misinformation was attributed to a credible source (Walter and Tukachinsky 2019). Further, corrections are less effective if the misinformation was repeated multiple times prior to correction or if there was a time lag before the correction; both are usually the case with health myths (Walter and Tukachinsky 2019). The longer a mental model of a health myth is held, the more it becomes integrated into memory and the more difficult it is to eradicate (Ecker et al. 2015).
Value- and belief-incongruent news can often have backfire effects, which has already been demonstrated for controversial topics such as climate change (Hart and Nisbet 2012) and vaccine safety (Nyhan and Reifler 2014). These are unintended effects in which originally incorrect attitudes are further reinforced by receiving a correction message (Cook et al. 2017; Nyhan and Reifler 2010, 2014). The effect can be partially explained by the fact that people counter-argue attitudinal mismatches and cognitive inconsistencies in order to strengthen their existing attitudes: after the perception of a debunking message, more attitudinally mismatched/cognitively inconsistent information is mentally activated than before, which in turn leads people to hold and report more extreme attitudes than before (Lodge and Taber 2000; Nyhan and Reifler 2010).
One focus of this study lies in the effects of professional debunking, that is, online debunking texts about health myths created according to the latest research. Most previous studies on the debunking of misinformation have focused mainly on the U.S. context and have often used college students as convenience samples, as the meta-analysis by Walter and Murphy (2018) shows. However, nationality, education and age are relevant factors influencing debunking effectiveness (Walter and Murphy 2018). There is some weak evidence in the meta-analysis that corrections work better for student samples than for nonstudent samples. The few empirical studies that highlight the intercultural facets of the debunking of misinformation support the possibility that corrective messages produce different outcomes in different societies (e.g., Cook and Lewandowsky 2016). This study aims to be representative of Germany in terms of age, gender and education.
Research Question 1: What is the persuasive effect of online corrective messages about health myths on recipients’ beliefs, (future) behaviour and feelings regarding the health myth?
Hypothesis 1: The more credible a debunking text is perceived to be, the greater its corrective effectiveness.
Visual health communication and its effects
Health information in (online) newspapers is usually communicated textually and/or visually. In general, images in scientific and medical communication have persuasive power, which can vary in strength depending on the type of image (e.g., Arsenault et al. 2006; Kessler et al. 2016; McCabe and Castel 2008). Images that make evidence visible can be used as a persuasive tool because they generate more attention and are easier to process and remember than textual content, and because individuals consider most of what they can visually capture to be true (Holicki 1993). In online health communication, for example, images are often used as visual arguments for the evidence of research results or as a proxy for the evidence of a fact. Such evidence-giving illustrations range from Roentgen images and photographs to tables and diagrams to models and specimens (Arsenault et al. 2006). Diagrams present results in great detail, accurately and systematically, and create more clarity and less opportunity for misinterpretation than purely linguistic mediation or numerical comparison (Pluviano et al. 2017). The aim of diagrams is to illustrate structures, structural changes and connections and to depict proportions (Isberner et al. 2013). Machine-technically created images, such as MRI or Roentgen images, provide insights into areas that are inaccessible to the human senses (Lohoff 2008). In this context, they are regarded as measuring instruments as well as replacements for the perception of the human eye and thus also serve as empirical evidence (Lohoff 2008). A picture depicting an expert is said to be less scientific and to have less evidentiary power than diagrams or machine-technically created images (Kessler et al. 2016). However, experts shown in a picture convey authenticity and thus also serve as a credibility heuristic (Holicki 1993). The photograph of a single scientist can overcome a deference to science bias from a text-only weight-of-evidence article because it showcases an episodic frame in a visual format (Dixon et al. 2015). People may then use their recall of the exemplar when making judgments about what scientists in general believe (Dixon et al. 2015). All of these image types ultimately convey a wide variety of information and evidence and represent it in many ways.
Images can influence recipients’ attitudes and behaviour regarding a specific issue to a greater extent than a purely text-based message can (Holicki 1993; Kessler et al. 2016; Nyhan and Reifler 2019). In general, images enhance cognitive processing and generate more attention, which is why they are better remembered than textual content (Holicki 1993; Houts et al. 2006). This increased comprehensibility, cognitive processing and better memorability mean that images also have a stronger influence on attitudes (Arsenault et al. 2006; Lohoff 2008). In general, images are learned, retained, understood and recognised more easily and quickly than words, since they are received in larger units than textual or verbal information, which recipients capture sequentially (Holicki 1993).
McCabe and Castel (2008) examined how the effects of different scientific image types differ from one another in a one-sided article, as did Kessler et al. (2016) in an article focused on a controversy. Both studies found that machine-technically created images have a stronger persuasive effect than the same message with a diagram or an image of an expert, and that texts with pictures had a stronger persuasive effect than texts without pictures. Nyhan and Reifler (2019) analysed the effect of text and diagrams on factual misperceptions regarding climate change. Their results show that graphical information has a stronger impact on reducing misinformation than text alone. However, the text only reproduced the information from the diagram and was not an original debunking text. Pandey et al. (2014) examined the persuasive influence of diagrams in comparison to tables on three different topics. The influence of prior attitudes becomes clear in this study: the graphics are more convincing when they reinforce recipients’ prior attitudes, and when the initial attitude is not strongly polarised, charts seem to have a stronger effect than tables on persuasion likelihood and attitude change. Pluviano et al. (2017) compared the effects of text, tables and images of sick, unvaccinated children in the debunking of vaccine misinformation. The pictures had an influence, but none of the intervention strategies worked; belief in vaccine myths and the desire not to vaccinate children increased over time (backfire effects).
How an image ultimately affects recipients, however, also depends on various influencing variables. For example, as with text-based persuasion processes (Kapoor et al. 2020), the credibility attributed by recipients also influences an image’s persuasiveness (Kessler et al. 2016).
Attributed credibility can generally be said to have a great influence on beliefs and thus on the effects of communication (Bentele 1988; Pornpitakpan 2004). Credibility is one of the most important criteria used to filter information and judge it as reliable. The persuasiveness of a communication strongly depends on whether it is perceived as credible or not (Kapoor et al. 2020; Pornpitakpan 2004; Valentini 2018). As an important heuristic, credibility is a filter in the process of knowledge acquisition and simultaneously controls this process (Bentele 1988; Schweiger 1998). Ultimately, credibility research examines any variables that can be used to make decisions and judgements about the credibility of a piece of information (Reinhard and Sporer 2010; Valentini 2018). In situations of uncertainty, credibility provides necessary orientation. Credibility is based on the subjective perception that a piece of information corresponds to the truth and thus determines the recipient’s degree of willingness to adopt the information received from the source as cognition (Eisend 2006; Valentini 2018). Credibility can thus be defined as a property attributed to people, institutions, or their communicative products (oral and written texts, audio-visual representations) by someone (a recipient) in relation to something (events, facts, etc.) (Bentele 1988, p 408; Valentini 2018). Credibility arises in communication starting from the recipient and consequently must be viewed and measured as an attribution from a recipient-oriented perspective (Bentele 1988; O’Keefe 2002; Roberts 2010; Schweiger 1998). Credibility is a hypothetical construct that arises from interactions between source, message and recipient (Kapoor et al. 2020; Roberts 2010).
In the study by Kessler et al. (2016), the credibility attributed to a journalistic article with different illustrations was shown to mediate the persuasive effect of the article content on recipients’ attitudes. In this specific case, this means that a high credibility assessment by the recipients influenced their attitudes. Participants who saw the articles with pictures rated the content as more credible than those who read the version without pictures, and recipients who found the articles more credible also showed stronger attitude changes.
However, even though images are increasingly being used in online communication, the field of visual persuasion in the debunking context is not yet well researched, and the current state of research does not reveal how debunking strategies work with different evidence-based images in relation to different scientific issues and health myths (Cook 2019).
Research Question 2: What is the persuasive effect of different images in corrective messages about health myths?
Hypothesis 2: A corrective message with a machine-technically created image has a stronger persuasive effect on recipients than the same message containing a diagram, which in turn has a stronger effect than the same message containing an image of an expert, which in turn has a stronger effect than the same message without an image.