
1 Introduction

Every day, our judgments and opinions are formed and affected by new information. During the COVID-19 pandemic, people around the world were bombarded by new information: on the origins of the disease, contamination numbers, transmission modes, lockdown measures, vaccination policies, and health risks. This information is often provided by sources that are well-known to the public, such as the World Health Organization and national health organizations like the NHS (UK), the HHS (US), and Santé publique France (France). In addition, most information about COVID-19 on television, the radio, and the Internet is given by specific competent people previously unknown to the public, such as epidemiologists, virologists, and intensive care specialists. During the COVID-19 pandemic, these sources, with their specific fields of expertise, have struggled to convince the general public to behave in ways that would make a way out of the pandemic possible. Although this phenomenon is not new (see Johnson et al., 2020), people seem to have had less confidence in scientific experts during the pandemic, as has been noted in various countries across the globe. In France, for instance, Le Figaro (Footnote 1) reported on a survey carried out by Ipsos that showed a sharp decline in the French public’s trust in researchers during the first six months of the pandemic. In a similar vein, Hurriyet Daily News (Footnote 2) in Turkey wrote that members of the Science Board of the Health Ministry have asked citizens to trust science. These examples underline that the opinions of experts are not systematically accepted as support for specific claims, such as claims about the desirability of wearing masks or working remotely. Why do experts sometimes fail to convince an audience? In this chapter, it will be argued that audience acceptance of appeals to experts, that is, the degree to which appeals to experts increase the audience’s belief in the claims that experts support, is conditional in two ways.

First, acceptance of expert opinions is conditional upon the degree to which appeals to expert opinions respect critical questions regarding the evaluation of these appeals. Argumentation scholars have theorized about the characteristics of an argument from expert opinion that come into play when assessing the strength of such an argument. In Sect. 18.3, these critical questions will be discussed, and empirical research will be reviewed that addresses the question of whether respecting the critical questions from argumentation theory matters for the actual acceptance of an appeal to expert opinion. For instance, studies have demonstrated that people are more likely to accept an argument from expert opinion when the expert is relatively more knowledgeable (e.g., Hornikx & Hoeken, 2007) and more reliable (e.g., Hoeken et al., 2014). The chapter’s original approach is to review such empirical research from two different disciplines: persuasion studies and argumentation studies.

Second, acceptance of expert opinions is conditional upon the audience’s prior belief in the claim that is supported by the expert. Social psychological work on motivated reasoning has argued that when people have an outspoken position towards a claim (i.e., strong belief or disbelief), arguments are evaluated not on the basis of their merits (cf. respect for critical questions) but on the basis of whether they are in line with the audience’s prior opinion. Section 18.4 presents the notion of motivated reasoning and discusses recent empirical work on the subjective evaluation of appeals to expert opinion. Section 18.5, finally, integrates the two types of conditions to better understand audiences’ reactions to experts in the COVID-19 pandemic. As an introduction to the topic, Sect. 18.2 below will first underline the relevance and importance of experts for everyday belief formation and change.

2 The Relevance of Experts for People’s Beliefs

Most of the information that people use to form opinions or make decisions comes from other people (Harris et al., 2015). These other people may be known, such as family members, friends, coworkers, and neighbors (see Sleeth-Keppler et al., 2015), but also unknown, such as people interviewed on television or quoted in reports on websites. The characteristics of these other people, such as their competence or attractiveness, play a major role in the degree to which the audience accepts their information (Martin & Marks, 2019). When it comes to more complex issues and problems, such as energy consumption, economic progress, or COVID-19, people will want to rely on other people who are particularly knowledgeable. In such circumstances, people rely on “those experts who have deeper knowledge about a topic and may be able to provide information about the validity and veracity of information” (Hendriks et al., 2015, p. 1).

The question of whom to rely on has become more relevant in times when it is more difficult to assess a person’s credibility (Hendriks et al., 2015; Metzger & Flanagin, 2013). As Hendriks et al. (2015) note, in the digital age there are fewer cues from which expertise and credibility can be inferred than in a face-to-face interaction with a person. In addition, with fewer gatekeepers than in traditional media (e.g., newspapers, television), there is more misinformation in new media, which increases the need for people to find out who the source of the information is, and how credible that source is (Metzger & Flanagin, 2013).

When experts are used to underline specific claims, such as on vaccination risks or the origins of the COVID-19 disease, they sometimes provide explanations, refer to scientific work, or present statistical information (Conrad, 1999). However, experts can also be used as a simple signal of credibility without providing additional information. In such circumstances, people expect experts to be able to provide this additional information when asked to. As Facione and Scherer (1978, p. 315) put it, “Instead of the whole proof the reliable authorities give their word. They vouch for the conclusion’s being true, but they could have given the proof. They should be able to supply an acceptable proof upon demand”. Empirical research supports the idea that people indeed expect experts to provide acceptable proof (Bohner et al., 2002; Wiener et al., 1990). In Bohner et al. (2002), for example, participants read a message that was attributed to a professor (high expertise) or a student (low expertise) and that contained strong or weak arguments. The weak-arguments message was less persuasive when the source was the professor than when it was the student. This means that whereas weak arguments were accepted from students, they were not accepted from professors. From this finding, it can be inferred that people expect relatively strong arguments from experts (just as they appear to expect relatively weak arguments from students).

The role of expertise and experts for human communication has been a topic of interest for many years and in many areas of research, including judgment and decision-making, thinking and reasoning, social psychology, legal testimony, philosophy, and argumentation theory (for examples, see Harris et al., 2015). For instance, in lay epistemic theory (Kruglanski, 1989), the notion of ‘epistemic authority’ refers to sources that people rely on because of their seniority, education, or role (for an overview, see Kruglanski et al., 2005). In the social psychology of persuasion, source credibility has been claimed to be most relevant for attitude change in circumstances where people are less motivated and less able to reflect on the content (e.g., arguments) of a message (e.g., Petty & Cacioppo, 1986). This multidisciplinary interest in experts and expertise for belief formation and change underlines the importance of experts in daily human communication.

3 Conditional Acceptance: Norms for Reasonable Argumentation

What kinds of experts are most successful in forming or changing people’s beliefs? What is a strong appeal to expert opinion? This section will discuss the first level of conditions that play a part in the acceptance or success of appeals to expert opinion. These conditions are the critical questions developed in argumentation theory to assess the quality of a given argument. First, these critical questions will be presented (Sect. 18.3.1). The critical questions for reasonable arguments present one angle on successful appeals to expert opinion. Another perspective on the conditional acceptance of arguments from expert opinion is to examine people’s responses to arguments that vary in the extent to which they comply with the critical questions. This perspective is represented by two strands of research: persuasion studies and argumentation studies. Empirical studies in persuasion (Sect. 18.3.2) and argumentation (Sect. 18.3.3) will be reviewed to address the question of whether what should be normatively persuasive is also persuasive in practice. This comparison between argumentative norms and actual persuasiveness (cf. O’Keefe, 2007) is paralleled by a growing body of research in argumentation, such as Van Eemeren et al. (2012) and Schumann et al. (2019).

3.1 Norms for Evaluating Appeals to Expert Opinion

In argumentation theory, the appeal to expert opinion (or ‘cognitive authority’, Walton, 1997) is considered an argument on the basis of which people accept a claim. The argument from expert opinion or the appeal to authority has been described by Walton (1997; Walton et al., 2008, p. 14) as follows:

Source E is an expert in subject domain S containing proposition A

E asserts that proposition A (in domain S) is true (false)

A may plausibly be taken to be true (false).

Argumentation schemes, such as this argument from expert opinion, have a normative component because of their inclusion of critical questions that serve as norms to assess the quality of any argument following that scheme. The argument’s quality is believed to depend on the responses that can be given to the questions. Different classifications of argumentation schemes present partly different critical questions (e.g., Hastings, 1962; Van Eemeren & Grootendorst, 1992; Walton et al., 2008). In this chapter, only the (often cited) questions that Walton (1997, p. 223) proposed for the argument from expert opinion are considered:

How credible is E as an expert source?

Is E an expert in the field that A is in?

What did E assert that implies A?

Is E personally reliable as a source?

Is A consistent with what other experts assert?

Is E’s assertion based on evidence?

Imagine that a claim about positive trends in COVID-19 contamination numbers is underlined by a particular epidemiologist, Dr. Wilson. The reliance on this expert as support for the claim about the positive trends is more justified to the extent that she is highly credible, has a high level of expertise, is personally reliable, etc. Although there are debates in argumentation theory about what critical questions are relevant, and what their normative status is (e.g., Ciurria & Altamimi, 2014; Godden & Walton, 2007; Hahn & Hornikx, 2016; Katzav & Reed, 2004), different classifications of argumentation schemes all agree on the notion of critical questions that are useful for evaluating a particular appeal to expert opinion.

3.2 Persuasion Studies on Persuasive Experts

Are appeals to expert opinion that comply with norms regarding the critical questions formulated in argumentation theory also more successful in convincing an audience than appeals that do not comply with these norms? This question has been addressed from the perspective of persuasion studies.

In persuasion studies, one of the central factors that is expected to determine the outcome of a persuasion process is source credibility (see, e.g., O’Keefe, 2016; Perloff, 2021). Theory and empirical research on source credibility (‘How credible is E as an expert source?’) distinguish between source expertise (‘Is E an expert in the field that A is in?’) and source reliability (‘Is E personally reliable as a source?’), and have also examined the presence of supporting arguments (‘Is E’s assertion based on evidence?’) and consistency with other experts (‘Is A consistent with what other experts assert?’). This means that, except for Walton’s (1997) critical question ‘What did E assert that implies A?’, there is relevant empirical research in the social psychology of persuasion for all of the questions. This research is reviewed below.

There is a long tradition of persuasion research on the effects of source credibility (see, for a review, Pornpitakpan, 2004). Wilson and Sherrell (1993) conducted a meta-analysis of empirical studies investigating the effect of manipulating the source’s credibility on persuasive outcomes. They report that, when all empirical evidence is taken together, high-credibility sources are generally found to be more persuasive than low-credibility sources. The largest effects were observed for studies that manipulated the source’s expertise. This variable was manipulated in various ways, such as through the source’s years of experience, years of study, and field of expertise in relation to the claim. An example of a study on field of expertise (cf. Walton’s ‘Is E an expert in the field that A is in?’) included in the meta-analysis is Maddux and Rogers (1980). In their experiment, a claim about the desirability of four hours of sleep a night was supported by two different researchers. One researcher was an expert in sleep research; the other, in music during the Baroque period. Field of expertise affected the persuasive outcome: the researcher with a relevant field of expertise was more persuasive than the expert with an irrelevant field of expertise.

Source reliability (cf. Walton’s ‘Is E personally reliable as a source?’) has also been examined in empirical studies. A number of studies have examined the impact of whether or not a source argued for a position that is opposed to the source’s self-interest. The idea is that a source arguing against his or her self-interest should be seen as more reliable than a source who advocates a position that is beneficial to him or herself. According to O’Keefe’s (2016) summary of some of the studies in this area, such an effect of arguing against one’s self-interest on perceived reliability has indeed been observed. This strand of research may be less informative about what makes experts persuasive, because the real-life counterpart of an expert with self-interest is more likely to be an impartial expert than an expert with an interest against the position (as was manipulated in the studies reviewed). This is because we may expect experts to be neutral and objective. One way of expressing neutrality is by paying attention to both sides of a story, which is what was studied in Mayweg-Paus and Jucks (2018). In their study, participants read a discussion between two experts, who communicated a one-sided or a two-sided perspective on a certain issue. Experts who provided both pro and contra arguments were perceived as more reliable than experts who each provided arguments for only one position.

Next, there is also empirical research relating to Walton’s (1997) question ‘Is E’s assertion based on evidence?’. O’Keefe (1998) presented a meta-analysis of empirical studies on the relationship between the explicitness of justification and persuasive effectiveness. One specific manipulation of justification explicitness is “variation in the completeness of arguments, that is, whether the advocate explicitly spells out the underlying bases of message claims (provides explicit articulation of the premises, supporting information, and the like)” (O’Keefe, 1998, p. 62). Findings from the meta-analysis show that sources have more persuasive success when they provide more information underlying their claim. Although this research strand did not specifically examine the difference between experts with and without supporting information, the finding of this meta-analysis can be taken as some support for the idea that experts may be more persuasive when they provide supplementary information than when they do not.

When it comes to Walton’s (1997) question ‘Is A consistent with what other experts assert?’, recent work on climate change examining the effects of scientific consensus suggests that experts are more persuasive when their view is consistent with that of other experts. In Study 2 reported in Lewandowsky et al. (2013), only half of the participants were given consensus information, showing that 97% of climate scientists agree on anthropogenic global warming. Participants who received this consensus information were more likely to accept anthropogenic global warming than those who did not receive that information. The authors conclude that “people may be particularly susceptible to perceived consensus among domain experts when forming their own beliefs about scientific issues” (p. 403). Similarly, Van der Linden et al. (2015) report on an empirical study demonstrating that higher estimations of scientific consensus were causally associated with higher levels of acceptance of climate change. Thus, there is emerging evidence for the idea that consensus plays a role in the impact that experts may have when their opinion is used in an argument from authority.

In summary, research results from persuasion studies can be considered as support for all critical questions that Walton (1997) formulated for the argument from expert opinion except for the question ‘What did E assert that implies A?’. For some of these questions the support is stronger than for others, but—in a general way—there is reason to believe that the argument from expert opinion may indeed be more successful if the expert is more credible, has more relevant expertise, is reliable, is consistent with other experts, and provides more evidence for his/her position.

3.3 Argumentation Studies on Persuasive Experts

The question of whether appeals to expert opinion that comply with the norms regarding the critical questions are also more successful in convincing an audience will now be answered from the perspective of argumentation studies. Whereas persuasion studies (see Sect. 18.3.2) were not specifically designed to examine variations in the characteristics of experts, argumentation studies have focused on the question of whether respecting the critical questions regarding the argument from expert opinion matters for claim acceptance. Therefore, these studies are presented in the current section. A number of studies on the effectiveness of arguments have investigated the impact of critical questions associated with different arguments, including the argument from expert opinion. This section will review the outcomes of these studies. First, the general procedure of these studies is briefly introduced.

Typically, the studies presented participants with a number of claims that were pretested to score neither as probable nor as improbable. An example of such a claim is: “Obligatory driving lessons for people over 70 can reduce their fear in traffic” (Hoeken et al., 2014). These claims were each followed by an argument that varied in quality. The quality of the argument was manipulated on the basis of the critical questions associated with the appeal to expert opinion. Three of Walton’s (1997) critical questions have been examined in these studies: the expert’s general credibility (‘How credible is E as an expert source?’), the expert’s field of expertise (‘Is E an expert in the field that A is in?’), and the expert’s reliability (‘Is E personally reliable as a source?’). Researchers have used these critical questions to construct high-quality and low-quality variants of the same argument from expert opinion. For the claim on obligatory driving lessons for people over 70, Hoeken et al. (2014) constructed different arguments from expert opinion, including a normatively strong one: “According to Dr Emiel Bentink, associate professor in Psychology at Utrecht University who did his Ph.D. on anxiety disorders among the elderly, people over 70 feel less anxious in traffic if they are required to take driving lessons” (p. 87). This control condition could be argued to fulfill all critical questions, whether directly (e.g., ‘associate professor’ hints at credibility) or indirectly (e.g., by not mentioning information about a lack of consensus). The variant that did not respect the expert’s general credibility started with: “According to Emiel Bentink, a second-year undergraduate student who recently attended a course on anxiety disorders among the elderly” (p. 87). In the condition with an irrelevant field of expertise, the PhD subject was changed into: “who did his Ph.D. on anxiety disorders among adolescents” (p. 87). Finally, reliability was manipulated on the basis of the expert’s vested interest: the expert with a self-interest in the claim was expected to be less reliable than the expert without such a self-interest. This manipulation started with “According to Dr Emiel Bentink, staff member of the Organisation of Driving School Companies who did his Ph.D. on anxiety disorders among the elderly”. In the studies presented below, participants judged a series of different claims, each supported by an argument of varying quality, and indicated their acceptance of each claim by rating how probable they found it (e.g., on a 5-point scale from ‘very improbable’ to ‘very probable’).

Table 18.1 shows an overview of the studies regarding the relationship between claim acceptance and the manipulation of an argument from expert opinion on the basis of Walton’s (1997) critical questions. This table treats each research result in an equal way, although there are major differences in the sample sizes of the underlying datasets, from 70 participants in Hornikx (2016) to 300 in Hornikx and Hoeken (2007). In a very general way, three conclusions may be drawn. First, claims are accepted to a lower degree when the supporting argument from expert opinion is based on an expert with a vested interest (low reliability) rather than without a vested interest (high reliability). This conclusion is based on the three studies mentioned in Table 18.1 for the critical question ‘Is E personally reliable as a source?’.

Table 18.1 Is claim acceptance of the argument from expert opinion affected by a manipulation of Walton’s (1997) critical questions?

Second, most studies found evidence in support of the relationship between field of expertise and claim acceptance. That is, in most of the six studies, experts who underlined a claim that was part of their own field of expertise were more successful than experts underlining a claim that did not fall under their expertise. In one of the two studies with an incongruent result (Hornikx & Hoeken, 2007), this result was actually consistent with a socio-cultural explanation. The finding that French participants were not sensitive to the expert’s field of expertise has been related to the role of experts (i.e., teachers, professors) in education in France, where these experts are attributed a relatively wide field of expertise (see Hornikx, 2011).

Third, the level of expertise did not affect claim acceptance in the two studies that examined this relationship. This would mean that ‘How credible is E as an expert source?’ is not a relevant question for the assessment of the argument from expert opinion. For at least two reasons, this conclusion does not seem warranted. First, credibility has been shown to be the result of expertise and reliability (see O’Keefe, 2016); the positive results for field of expertise and vested interest (reliability) therefore strongly imply that credibility, as an overall qualification, matters for claim acceptance. Second, more than vested interest or field of expertise, level of credibility seems to be a perception on a continuum. The difference between a professor and a second-year undergraduate student studying the same discipline may be smaller than that between an expert with and an expert without a vested interest. The manipulations in the two studies were therefore, perhaps, less successful in differentiating sharply between high and low credibility.

In this section, results of recent studies on the persuasiveness of arguments were presented. Overall, the picture that emerges is that, for field of expertise and reliability, respecting the critical questions matters for the impact of the argument from expert opinion on claim acceptance. There are, however, two limitations that characterize this set of studies. The first limitation is that the results hold only for situations in which people view nothing more than a claim with a single supporting argument. The impact of critical questions on claim acceptance is likely to be smaller in real-life settings, where claims and arguments are part of longer texts or messages and are not clearly marked as claims and arguments. In fact, empirical research has shown that normatively strong arguments were not more persuasive than normatively weak arguments when presented in longer texts (e.g., Hoeken & Hustinx, 2007; Hornikx, 2016, 2018).

The second limitation is that the results are only relevant for situations in which people assess claims about which they have a neutral opinion. Although people are likely to have neutral opinions about dozens of claims, they also hold more extreme positive and negative opinions about other claims. For these kinds of claims, the quality of an argument (e.g., the quality of the expert in an argument from expert opinion) may not play the same role as it does for neutral claims. This brings us to the second level of conditions for the acceptance of appeals to expert opinion: prior beliefs.

4 Conditional Acceptance: Prior Beliefs

In this section, it is argued that the acceptance of expert opinions is also conditional upon the audience’s prior belief in the claims. As an introduction, the notion of motivated reasoning will be discussed in Sect. 18.4.1. Subsequently, recent empirical work on the subjective evaluation of appeals to expert opinion is presented in Sect. 18.4.2.

4.1 Motivated Reasoning and Evaluation of Arguments

The often implicit rationale behind the critical questions is that they provide a rational gold standard against which argumentation schemes should be assessed, in a similar way as logic has for a long time been considered the standard for the appropriate evaluation of formal arguments (for a discussion, see Hahn & Oaksford, 2006). This means that one could say that people reason correctly when they take into account the critical questions regarding the argumentation scheme that is associated with the concrete argument they evaluate. According to the Argumentative Theory of Reasoning (Mercier, 2016; Mercier & Sperber, 2011), people are expected to be fairly good at evaluating arguments. Although their theory does not specify the use of critical questions as a tool for normatively good evaluation, this connection has been made by Hoeken et al. (2014, pp. 92–93): “Mercier and Sperber (2011) argue that people are skilled at evaluating arguments, which would imply that people are sensitive to norms for argument quality”.

However, Mercier and Sperber note that people sometimes also evaluate arguments less objectively, namely in situations in which they already agree or disagree with the claim that the arguments support. Reasoning in these circumstances has been labeled ‘motivated reasoning’ (Kunda, 1990), which means that people are motivated to reach or hold a specific conclusion, and that they evaluate information in line with this conclusion (for recent discussions, see Hahn & Harris, 2014; Kruglanski et al., 2020).

There is ample research evidence supporting the idea that people evaluate information on the basis of their existing beliefs and attitudes (see, e.g., Ditto & Lopez, 1992; Edwards & Smith, 1996; Lord et al., 1979; Mojzisch et al., 2014; Taber & Lodge, 2006; but see Martire et al., 2020). The classic experiment in this domain is the study reported in Lord et al. (1979). Participants were American proponents or opponents of capital punishment. They were all asked to evaluate two studies on the deterrent effect of capital punishment; one study showed a deterrent effect, the other study an antideterrent effect. Participants were found to be more critical towards the study whose result was inconsistent with their position as proponent or opponent, and less critical towards the study whose result supported their initial position. This means that participants did not evaluate the study information on its objective merits but on its match with their own position.

In their summary of key readings in this area, Mercier and Sperber (2011, p. 67) conclude that “motivated reasoning leads to a biased assessment: Arguments with unfavored conclusions are rated as less sound and less persuasive than arguments with favored conclusions”. How does this apply to the argument from expert opinion? In the next section, recent research on subjective reasoning with expert opinions will be discussed.

4.2 Evaluation of Expert Opinions

In Sect. 18.3.2, an overview of persuasion studies on the role of expert opinions was presented. Early work in this area already observed that strong experts were not always more persuasive than weak experts. That is, under certain conditions, weak experts (low credibility) were more persuasive than strong experts (high credibility), namely when the position advocated was in line with the receiver’s attitude (see, for discussions, Pornpitakpan, 2004; O’Keefe, 2016). As the discussion below shows, there is a growing interest in the relationship between receivers’ prior position towards a standpoint (counter- or pro-attitudinal stance) and the impact of expert opinions.

Biased evaluation of experts was shown in empirical work reported in Zaleskiewicz and Gasiorowska (2018; cf. Zaleskiewicz et al., 2016, Study 3). In Study 1, they manipulated the position of participants (positive or negative) and the position of financial experts (positive or negative) on the desirability of two financial products: investments and life insurance. The participants read about an advisor (expert) who was either positive or negative about the product, and a scenario followed that manipulated their own position towards the product. Finally, participants assessed the expertise of the advisor. Results showed that expertise was rated higher under conditions of congruence (the participant’s position was similar to that of the advisor) than under conditions of incongruence (the participant’s position was dissimilar to that of the advisor). That is, participants who were instructed to be in favor of the financial product rated the advisor who was positive about the product as having more expertise than the advisor who was negative about the product. Similarly, participants instructed to be against the financial product rated the advisor who was negative about the product as having more expertise than the advisor who was positive about the product.

Marks et al. (2019) examined which sources people prefer to be informed by: people whose political views they share, or people who are experts? Participants had to solve shape tasks, in which they were asked to indicate whether each new shape belonged to one or the other category. Prior to this task, they had learned about the political views of other participants (‘sources’), and about those sources’ competence in solving the shape tasks. While solving the shape tasks, participants were asked which source’s response they wanted to see in order to be better informed. These sources were either politically like-minded with the participant or not, and they were either competent in the task (high expertise) or not. Results from this experiment were striking: participants preferred source similarity over source expertise. That is, they preferred viewing the response from people they agreed with (politically like-minded) but who did not have expertise in the task over people who did have expertise in the task but who were politically dissimilar to them. This means that people assess experts on the basis of the degree to which a source relates to their own point of view, even when that source is incompetent.

The strongest evidence to date on the biased evaluation of the argument from expert opinion comes from Hoeken and Van Vugt (2016). For a claim on the desirability of mixed schools, they constructed arguments supporting the desirability (e.g., better school results) or supporting the undesirability. For both kinds of arguments, some variants were normatively strong (e.g., for the argument from analogy, Sweden in the argument was comparable to the Netherlands in the claim) and other variants were normatively weak (e.g., for the argument from analogy, France in the argument was less comparable to the Netherlands in the claim). The experts in the argument from expert opinion had a vested interest (e.g., a director of a mixed school) or no vested interest (a child educator). Participants were instructed to defend either the desirability or the undesirability of mixed schools. They were asked to evaluate all arguments in a think-aloud protocol. After analyzing these evaluations, Hoeken and Van Vugt (2016) first concluded that participants used more criteria relevant to the arguments at hand (e.g., vested interest, representativeness, similarity) when evaluating arguments that ran counter to their own position than when evaluating arguments that were in line with their position. This means that people are, indeed, more critical about arguments that attack their own position, and that this criticism materializes through the use of critical questions. A second conclusion of the study was that people use these criteria in a biased way, that is, they use them to show the low quality of arguments that run counter to their view, and that they use them to show the high quality of arguments that are in line with their view. A concrete example of this second conclusion is the following. Participants appreciated the child educator (strong argument from expert opinion) for his expertise when they agreed, but disqualified his expertise (Is he an expert on mixed schools?) when they disagreed. In a similar way, opponents pointed towards the source’s vested interest (weak argument from expert opinion), whereas proponents underlined the relevant field of expertise. Thus, the same experts were either qualified or disqualified on the basis of the same set of criteria relevant to the argument from expert opinion. Rather than on the basis of an objective assessment of the quality of the expert, participants used the criteria in a biased, subjective way depending on their (dis)agreement with the position of the expert.

5 The Role of Experts in the COVID-19 Pandemic

Other people play a major role in shaping our opinions, beliefs, and decisions. People who can be expected to be reliable and competent—experts—can be particularly influential, but sometimes they are not, as examples from the COVID-19 pandemic showed. In this chapter, the acceptance of an appeal to expert opinion was argued to be conditional upon norms for reasonable argumentation, and upon the audience’s prior beliefs in claims supported by experts. Empirical research in persuasion studies and argumentation underlines that experts are indeed influential to the degree that they are more credible, have more relevant expertise, are reliable, are consistent with other experts, and provide more additional evidence for their position (e.g., Hoeken et al., 2014; Hornikx & Hoeken, 2007). This means that people’s acceptance of an expert opinion is conditional on critical questions from argumentation theory. Moreover, this acceptance is also conditional on the receiver’s prior beliefs. If people already have an explicit opinion or belief, they will judge the experts depending on the congruence between the experts’ views and their own view. This congruence leads to a subjective, biased evaluation of the criteria associated with the argument from expert opinion.

What do these results mean for understanding the acceptance and lack of acceptance of experts during the COVID-19 pandemic?

First, in the beginning of 2020, various sources emerged on the Internet and on television, and people were forming, partly on the basis of these sources, their own opinion on novel topics such as contaminations, risks, and vaccines. Many sources presented themselves at the same time, obscuring the distinction between true experts and other people with outspoken ideas. The factor that most likely played a role in the experts’ lack of influence is the weak consensus among experts (Footnote 3). This lack of consensus was not surprising for at least two reasons. First, different sources communicated evidence and opinions on the basis of information that was constantly changing. Their opinions on how the virus infects other people changed as new research evidence was presented. Second, experts may view the same issue (e.g., the need for working remotely) differently according to their own discipline. For instance, epidemiologists may strongly advise working remotely, while economists are more likely to point to the drawbacks of working remotely for the economy. The consequence of this lack of scientific consensus is that people (as shown in Sect. 18.3.2) rated the expertise of experts lower and, accordingly, were less inclined to accept the claims they supported.

Second, people are likely to have assessed the expertise and reliability of experts on the basis of their beliefs in COVID-related claims, such as on the usefulness of wearing masks or getting vaccinated. Experts can certainly be effective in the promotion of masks or vaccines (see, e.g., Chevallier et al., 2021), but they lose influence if people already have strong opinions, as shown in Sect. 18.4. Once people have formed beliefs regarding COVID-19 (e.g., about the severity of the threat, the usefulness of the measures, or the risks of the vaccine), these beliefs are hard to change (see Lao & Young, 2020; Mercier, 2020), even by experts.

At the same time, beyond experts, people themselves spread their views, creating a cacophony of opinions and information about COVID-19. It is no surprise, then, that researchers are examining opinion formation and sharing, including misinformation and fake news, in the digital era (see Pennycook & Rand, 2021). For argumentation scholars, there is a clear part to play. They have catalogues of argumentation schemes and associated critical questions for reasonable argumentation, which can be used in examinations of real-life opinion sharing on topics regarding COVID-19. What arguments are exchanged? How reasonable are these arguments? On what basis do people dismiss another person’s argument? These kinds of questions can best be addressed by also taking into account theory and research from the argumentation discipline.