1 Introduction

In December 2019, a novel coronavirus emerged and generated a fast-spreading pandemic, reaching 160 countries by March 2020 [1]. Given that infectious diseases have been responsible for the greatest human death tolls in history [2], the spread of COVID-19 triggered panic, confusion, and uncertainty in the population [3]. This created the perfect storm for the spread of another, equally consequential epidemic: misinformation and conspiracy theories [4, 5]. In this context of uncertainty, an unregulated social media environment provided fertile ground for the dissemination of such beliefs [6,7,8]. Scientific research soon confirmed that most people held at least one misperception about COVID-19 [9], a particularly problematic finding given that misinformation has been associated with harmful consequences. For example, belief in conspiracy theories has been linked to decreased vaccination rates [10], increased climate change denial [11], and increased intergroup prejudice [12].

Conversely, knowledge and belief in accurate information have been shown to have beneficial effects in times of crisis, consistent with the idea that people’s behavior is influenced by knowledge [13]. For instance, having more COVID-19 knowledge was associated with a lower likelihood of engaging in dangerous behaviors such as going to crowded places or not wearing masks [14]. Therefore, increasing people’s knowledge by promoting accurate information and reducing misinformation is essential during such a global crisis. So far, strategies to increase COVID-19 knowledge through accuracy nudges, such as reminding people to think about accuracy when reading COVID-19-related information, have been found successful at increasing the perceived accuracy of accurate information [15]. Expanding this work, we aim to explore ways to facilitate the acquisition of COVID-19 knowledge by increasing belief in accurate information and decreasing belief in conspiracy theories. To this end, we assess the role of information sources and political ideology in COVID-19 knowledge assimilation and belief updating.

Prior literature has established that the source of information has an important impact on knowledge assimilation. The credibility of the source has been found to influence belief change in a variety of domains [16, 17]. Overwhelmingly, statements made by credible sources are more likely to be believed and integrated into one’s mental model [18,19,20]. In the present study, we compare the relative effectiveness of varying information sources at increasing overall COVID-19 knowledge acquisition. We assess the impact of different sources on people’s beliefs in a horse-race design, with the same information being transmitted by (a) groups of people, either ideologically committed or not, (b) ideologically committed individuals (i.e., political figures), and (c) experts. This design serves to establish which sources are most effective at facilitating knowledge assimilation and whether there are any ideological biases in knowledge incorporation.

Source credibility has also been found to influence concrete behavioral intentions such as voting [21] and purchasing intentions [22, 23]. The current pandemic context allows us to assess whether accumulating knowledge from various sources regarding the COVID-19 vaccine leads to increased vaccination intentions, and which source leads to the highest increase in such intentions.

The influence of groups of people has been investigated in the vast literature on social norms, defined as perceptions of what others are doing, approve of, or disapprove of [24]. People rely heavily on social norms to understand the situations they are in, especially in contexts of uncertainty, and norms are a strong predictor of behavior [24,25,26]. However, although people are influenced by norms, their perceptions of those norms are often inaccurate, and they tend to either underestimate or overestimate others’ behaviors, especially health-related ones [27, 28]. Leveraging these effects, we are interested in whether COVID-19-related knowledge can be promoted by portraying belief in accurate information as normative and belief in conspiracy theories as counter-normative. Given previous work showing that group norms are also significant predictors of intentions [29], we are also interested in whether promoting knowledge about the COVID-19 vaccines can increase vaccination intentions.

But information sources might not be similarly effective across a given population. A well-established literature shows that motivations to reach particular conclusions affect information processing [30]. This suggests that there might be meaningful differences between liberals and conservatives in how information sources influence beliefs and therefore knowledge [31]. The first possibility is that people are more sensitive to sources that match their ideology (e.g., a Republican might be more sensitive to information from another Republican than from a Democrat, and vice versa). This possibility is supported by prior work showing that perceived norms are most influential when they arise from others with whom we share a common identity [32, 33]. The second possibility involves a differentiation between liberals and conservatives, such that conservatives might be more resistant to change than liberals, as has been shown before [34, 35]. This is also consistent with a recent study in which Republicans tended to be less concerned about COVID-19 and less likely to share accurate information about COVID-19 than Democrats [9]. This is perhaps not surprising given that Republican leaders such as President Trump and conservative media outlets such as Fox News have expressed skepticism regarding the risk posed by the virus [36,37,38]. In line with this messaging, a Pew poll conducted in March 2020 estimated that most Democrats (59%) but only a minority of Republicans (33%) viewed COVID-19 as a major threat. The third possibility is that ideology might not interact at all with information sources when it comes to beliefs. This possibility is supported by recent research showing that accurate beliefs about COVID-19 are broadly associated with reasoning skills regardless of political ideology [9].

To investigate, we designed an experiment (Fig. 1) in which participants first rated the accuracy of a set of statements about COVID-19 (accurate information and conspiracies; pretest). Then, in the second phase, they were randomly assigned to one of 10 between-subjects conditions in which we varied the source that provided belief-relevant information: a political leader (President Trump, President Biden), a health authority (Doctor Fauci, the CDC), an anecdote (of a Democrat or of a Republican), or a large group of prior participants portrayed as Democrats (Democratic Normative), as Republicans (Republican Normative), or without ideological designation (Generic Normative). In the Control Condition, participants skipped the second phase entirely. Importantly, the source always endorsed accurate information and denied conspiracies; therefore, trusting the source and incorporating its message would always increase scientific knowledge. Finally, participants rated the accuracy of the initial set of statements again (posttest). For further details regarding the design and procedure, see the Methods section.

Fig. 1

Study design. At pretest, participants rated eight COVID-19 statements. Then, the source of the statements was revealed in one of the 10 between-subjects conditions (political leaders, health experts, anecdotes, ideological groups, generic groups, or control). Lastly, participants rated the initial statements again

This experimental design has several strengths. First, it accomplishes a horse-race comparison between different sources using the same materials. Second, it uses a US census-matched sample to increase the generalizability of the results. Third, it is conducted during an ongoing, real-time health crisis, creating an ecologically valid context of investigation. Finally, we replicate this experiment in Study 2, with different stimulus materials (i.e., regarding the COVID-19 vaccine), an increased sample size, and an additional measure of intent to get vaccinated against COVID-19.

Since we are interested in establishing which source is most effective at increasing knowledge, this investigation is mainly exploratory. That said, two main hypotheses were formulated based on prior literature. Our first hypothesis was that participants in the Generic Normative Condition would change their beliefs in line with the source, thereby increasing in knowledge compared to the Control Condition. Second, we hypothesized a partisan bias in belief change in the form of an interaction between participant and source ideology, such that participants would change their beliefs in line with the source more when the source matched their ideology. In other words, Republicans would be more sensitive to Republican sources, whereas Democrats would be more sensitive to Democratic sources.

2 Results Study 1

First, we ran a between-subjects ANOVA with change in knowledge as the dependent variable and condition as the between-subjects variable and found a significant main effect of Condition, F(9, 1050) = 10.08, p < 0.001, ηp² = 0.08 (Fig. 2). To test our first hypothesis, that participants in the Generic-Normative Condition would change their beliefs in line with the source, thereby increasing in knowledge compared to the Control Condition, we conducted an independent-samples t test and found that, as hypothesized, participants in the Generic-Normative Condition (M = 13.22, SD = 22.17) increased their knowledge more than participants in the Control Condition (M = 1.42, SD = 9.20), t(137) = 5.01, p < 0.001, Cohen’s d = 0.69, CI [7.14, 16.45]. Additionally, we conducted independent-samples t tests assessing the differences in knowledge change between the Control Condition and all other conditions. We found that the Normative Democratic, Normative Republican, Fauci, CDC, and Trump (the latter in the opposite direction) Conditions were significantly different from the Control Condition (Fig. 2; statistics reported in Table 1). We note that the significance level was adjusted for multiple comparisons (i.e., nine comparisons, significance threshold p < 0.0055) using the Bonferroni correction.
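For readers of our analysis notebook, a minimal sketch of these analyses in Python might look as follows; the file and column names (study1_data.csv, condition, knowledge_change) are hypothetical placeholders for illustration, not necessarily the names used in our released code:

import pandas as pd
from scipy import stats

# Minimal sketch, assuming a dataframe with one row per participant, a
# 'condition' label, and a precomputed 'knowledge_change' score (posttest
# knowledge minus pretest knowledge). File and column names are hypothetical.
df = pd.read_csv("study1_data.csv")

# One-way between-subjects ANOVA on knowledge change across the 10 conditions
groups = [g["knowledge_change"].values for _, g in df.groupby("condition")]
F, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Independent-samples t tests comparing each experimental condition to Control,
# with a Bonferroni-corrected significance threshold (0.05 / 9 comparisons)
control = df.loc[df["condition"] == "Control", "knowledge_change"]
experimental = [c for c in df["condition"].unique() if c != "Control"]
threshold = 0.05 / len(experimental)

for cond in experimental:
    treated = df.loc[df["condition"] == cond, "knowledge_change"]
    t, p = stats.ttest_ind(treated, control)
    print(f"{cond}: t = {t:.2f}, p = {p:.4f}, "
          f"significant after correction: {p < threshold}")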

Fig. 2

Change (posttest minus pretest) in knowledge (belief in accurate information, measured from 0 to 100, minus belief in conspiracy theories, also measured from 0 to 100) for the target items, in each of the 10 between-subjects conditions. Error bars represent ± 1 standard error of the mean

Table 1 Statistics of nine independent sample t tests comparing all experimental conditions to the control condition

To investigate our second hypothesis, of a partisan bias in knowledge change in the form of an interaction between participant and source ideology, we ran a between-subjects ANOVA with change in knowledge as the dependent variable and condition and participant ideology (Democrats vs. Republicans) as the between-subjects variables. We found a main effect of condition, F(9, 1038) = 9.99, p < 0.001, ηp² = 0.08, but not of participant ideology, F(1, 1038) = 0.004, p = 0.948, ηp² = 0.00, and no interaction between the two variables, F(9, 1038) = 1.35, p = 0.205, ηp² = 0.01 (Fig. 3). This suggests that Democrats and Republicans are similarly affected by COVID-19 information sources.
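The corresponding condition-by-ideology ANOVA can be sketched with statsmodels; again, the dataframe and column names are hypothetical placeholders rather than our actual variable names:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Minimal sketch of the two-way between-subjects ANOVA (condition x participant
# ideology) on knowledge change. File and column names are hypothetical.
df = pd.read_csv("study1_data.csv")

model = smf.ols("knowledge_change ~ C(condition) * C(ideology)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction term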

Fig. 3

Change (posttest minus pretest) in knowledge by participant type (Democrats in Blue vs. Republicans in Red), in each of the 10 between-subjects conditions. Error bars represent ± 1 standard error of the mean

Thus, using a horse-race experimental design and a US census-matched sample, we found that individuals’ COVID-19 knowledge increased compared to a Control Condition when information was provided by large groups of people (Democrats, Republicans, Generic) and health authorities (Doctor Fauci and the CDC), but not when provided by political leaders (Trump, Biden) or anecdotes. We did not find ideological differences in knowledge integration. Intriguingly, not only did our participants fail to update their beliefs based on information from political leaders, but when the source of information was President Trump, they displayed a backfire effect, changing their initial beliefs away from whatever President Trump had conveyed. Given our study design, in which all the sources supported accurate information and refuted conspiracy theories, by moving away from his message, participants decreased their level of COVID-19 knowledge. Of the ten conditions in this study, this was the only one in which knowledge decreased from pretest to posttest, pointing to a general skepticism toward any COVID-19 information coming from President Trump.

To ensure the generalizability and replicability of these findings, in Study 2 we investigated these effects in the context of the COVID-19 vaccine. We increased the sample size to improve our power to detect potential interactions with participants’ political affiliation. Finally, in Study 2, we were also interested in whether vaccine-related knowledge accumulation would predict vaccination intention.

3 Results Study 2

As in Study 1, we began our analyses by running a between-subjects ANOVA with change in knowledge as the dependent variable and condition as the between-subjects variable and found a significant main effect of Condition, F(9, 1866) = 2.088, p = 0.027, ηp² = 0.01 (Fig. 4). To test our now pre-registered first hypothesis, that participants in the Generic-Normative Condition would change their beliefs in line with the source, thereby increasing in knowledge compared to the Control Condition, we conducted an independent-samples t test and found that, as hypothesized, participants in the Generic-Normative Condition (M = 11.78, SD = 24.04) increased their knowledge more than participants in the Control Condition (M = 3.92, SD = 13.02), t(330) = 4.06, p < 0.001, Cohen’s d = 0.39, CI [3.86, 11.85], replicating the result of Study 1. Additionally, we conducted independent-samples t tests assessing the differences in knowledge change between the Control Condition and all other conditions. This time, none of these conditions were significantly different from the Control Condition when adjusting the significance level for multiple comparisons (i.e., nine comparisons, significance threshold p < 0.0055) using the Bonferroni correction (statistics reported in Table 2). We note that the pattern of results is similar between the two studies, but the differences do not reach corrected statistical significance levels in Study 2, mainly because participants in the Control Condition now increased in knowledge from pretest to posttest.

Fig. 4

Change (posttest minus pretest) in knowledge (belief in accurate information, measured from 0 to 100, minus belief in conspiracy theories, also measured from 0 to 100) for the target items, in each of the 10 between-subjects conditions. Error bars represent ± 1 standard error of the mean

Table 2 Study 2 statistics of nine independent sample t tests comparing all experimental conditions to the control condition

To investigate our second hypothesis, of a partisan bias in knowledge change in the form of an interaction between participant and source ideology, we ran a between-subjects ANOVA with change in knowledge as the dependent variable and condition and participant ideology (Democrats vs. Republicans) as the between-subjects variables. We found a main effect of condition, F(9, 1856) = 2.08, p = 0.027, ηp² = 0.01, but not of participant ideology, F(1, 1856) = 1.62, p = 0.20, ηp² = 0.001, and no interaction between the two variables, F(9, 1856) = 0.68, p = 0.724, ηp² = 0.003 (Fig. 5). This suggests that Democrats and Republicans are similarly affected by COVID-19 information sources, replicating the finding of Study 1.

Fig. 5

Change (posttest minus pretest) in knowledge by participant type (Democrats in Blue vs. Republicans in Red), in each of the 10 between-subjects conditions. Error bars represent ± 1 standard error of the mean

Next, we wanted to explore the relation between vaccine knowledge and vaccination intention. Given that 13% of participants had already been vaccinated, we excluded them from the following analyses.

First, we ran a linear mixed model with intention to get vaccinated as the dependent variable, fitting pretest vaccine knowledge and participant ideology as fixed effects, and included by-participant random intercepts. We found a significant interaction between pretest knowledge and participant ideology (β = 0.10, SE = 0.04, t(1629) = 2.33, p = 0.019), such that pretest knowledge predicted vaccination intention more strongly for Democrats (β = 0.75, SE = 0.02, t(1630) = 29.43, p < 0.001) than for Republicans (β = 0.61, SE = 0.02, t(1630) = 22.79, p < 0.001) (Fig. 6A).
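A simplified sketch of this analysis is shown below; it approximates the reported mixed-effects model with an ordinary regression containing a knowledge-by-ideology interaction, and the file and column names are hypothetical placeholders:

import pandas as pd
import statsmodels.formula.api as smf

# Simplified sketch: an OLS regression with a knowledge-by-ideology interaction
# standing in for the full mixed-effects specification reported in the text.
# File and column names are hypothetical.
df = pd.read_csv("study2_data.csv")
df = df[~df["already_vaccinated"]]  # exclude already-vaccinated participants

interaction = smf.ols(
    "vaccination_intention ~ pretest_knowledge * C(ideology)", data=df
).fit()
print(interaction.summary())

# Ideology-specific slopes, analogous to the per-group estimates in Fig. 6A
for group in ["Democrat", "Republican"]:
    sub = df[df["ideology"] == group]
    fit = smf.ols("vaccination_intention ~ pretest_knowledge", data=sub).fit()
    print(group, round(fit.params["pretest_knowledge"], 2))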

Fig. 6

Intention to get vaccinated against COVID-19 as predicted by vaccine knowledge at pretest (A) and by vaccine knowledge change (B), split by participant type (Democrats in Blue vs. Republicans in Red). Note that knowledge change can be negative (in B) if participants decrease in knowledge from pretest to posttest. Shaded regions represent 95% confidence intervals

Given that more knowledge about the vaccine was associated with a stronger intention to get vaccinated, we next assessed whether integrating new knowledge also results in a stronger intention to get vaccinated. We ran a linear mixed model with intention to get vaccinated as the dependent variable, fitting vaccine knowledge change and participant ideology as fixed effects, and including by-participant random intercepts. We found that change in vaccine knowledge (i.e., knowledge accumulation) predicted intention to get vaccinated positively for Democrats (β = 0.15, SE = 0.05, t(1630) = 2.88, p = 0.003) and negatively for Republicans (β = − 0.14, SE = 0.05, t(1630) = − 2.76, p = 0.005), but we found no significant interaction between the two political ideologies (β = 0.06, SE = 0.07, t(1629) = 0.872, p = 0.384) (Fig. 6B).

Moreover, we wanted to assess whether knowledge accumulation predicts intention to get vaccinated differently depending on the source of the information.

We were interested in whether participants are more likely to get vaccinated if the source of the knowledge accumulation is a generic normative group, an ideologically consistent source (e.g., for Democrats: the group of Democrats, the anecdote of a Democrat, or President Biden), an ideologically inconsistent source (e.g., for Democrats: the group of Republicans, the anecdote of a Republican, or President Trump), or a health expert. We ran a linear mixed model with intention to get vaccinated as the dependent variable, fitting change in knowledge, participant ideology, and the collapsed conditions as fixed effects, and included by-participant random intercepts. We found that knowledge change predicted vaccination intention only for Democratic participants in the health experts condition (β = 0.18, SE = 0.08, t(1622) = 2.10, p = 0.035) (Fig. 7). Although, for simplicity, we collapsed the conditions in this manner for this analysis, the Supplementary Materials include the extended mixed model, in which knowledge change predicts vaccination intention only for Democratic participants in the Doctor Fauci condition (β = 0.33, SE = 0.13, t(1612) = 2.51, p = 0.012) (Figure S1).

Fig. 7

Vaccination intention predicted by change in knowledge (i.e., knowledge accumulation) for Democrats (in blue) and Republicans (in red) in the collapsed conditions: generic normative sources, Democratic sources, Republican sources, and health experts

Thus, using different informational content, we replicated the finding that normative cues involving generic sources are most impactful at increasing scientific knowledge. We also replicated the finding that political ideology does not impact knowledge integration. In contrast to Study 1, no other source of information led to more knowledge accumulation than the control condition, given that participants in the control condition now increased in knowledge from pretest to posttest, perhaps as a result of increased epistemic vigilance the second time they rated the statements [39]. This difference may also have been due to the difference in participants’ initial levels of knowledge between the two studies, as participants in Study 2 were more knowledgeable at pretest than participants in Study 1. Further investigation is, however, necessary to pinpoint the specific mechanism behind this difference.

Beyond the measures included in Study 1, Study 2 added a measure of intention to get vaccinated against COVID-19. We found that initial vaccine knowledge is a strong predictor of vaccination intention for both Democrats and Republicans. We also found that the more information Democrats accumulated, the more likely they were to want to be vaccinated, whereas the more information Republicans accumulated, the less likely they were to want to be vaccinated. We speculate that Republicans’ lower initial level of knowledge about COVID-19 vaccines compared to Democrats’ may have contributed to this effect. When further investigating these findings while also taking into account the source of the information, we found that information accumulation increased vaccination intention only for Democrats in the health experts condition. More specifically, Democrats were most likely to increase in vaccination intention when Doctor Fauci was the source of their information accumulation.

4 Discussion

In two studies, we used an experimental horse-race design to assess which information source is most likely to lead to COVID-19 and COVID-19 vaccine knowledge incorporation. We found that individuals’ knowledge increases most when information is provided by a generic group of people. This finding aligns with seminal work showing that social norms have a meaningful impact on people’s attitudes and behaviors [24, 40], especially under uncertainty [27]. Here, we show that portraying information as normative (i.e., highly endorsed by others) increases people’s belief in its accuracy, and portraying information as counter-normative (i.e., highly opposed by others) decreases people’s belief in its accuracy. However, several sources (e.g., the ideological groups, the health experts) emerged as impactful in Study 1 but not in Study 2. The pattern of each source’s relative impact looks similar across the two studies, but an investigation into the contexts in which these sources have weaker versus stronger effects is certainly warranted.

Also, in both studies, we find that knowledge accumulation occurs similarly for Democrats and Republicans. This result diverges from previous studies which found that norms are most influential when they arise from others with whom people share a common identity [32, 33]. We also expected ideological differences in knowledge integration from congruent versus incongruent sources based on prior work showing that conservatives are more resistant to change than liberals [34, 35] and that Republicans are less concerned about COVID-19 than Democrats [41]. However, in the present work, we did not find ideological differences in knowledge integration, consistent with prior work in which Democrats and Republicans updated their beliefs similarly as a function of evidence [42]. Along the same lines, Pennycook and colleagues found that accurate beliefs about COVID-19 are associated with reasoning skills regardless of political ideology [9]. One possible explanation for the lack of ideological differences could be that when stakes are high, people might moderate their ideological biases. Another possibility is that ideological differences do not surface in a short-term context and only become apparent over longer periods of time as novel information integrates with preexisting ideological schemas. This possibility is consistent with prior work in the persuasion literature, such as the sleeper effect [43]. However, these speculative explanations should be empirically tested in future work.

The lack of an effect of political leaders on belief change is also in line with prior work showing that ideological messages are not effective when communicated by leaders [44]. Interestingly, we also found no effect of anecdotes on knowledge change. This is surprising, as prior work has already established the effectiveness of anecdotal evidence in persuasion [17]. One explanation for our null finding may be that the conditions of stress and uncertainty caused by the COVID-19 pandemic reduced the impact of single voices as persuasive sources. To test this assumption, one could systematically manipulate perceived threat and observe the impact of anecdotal evidence on knowledge change, a direction we deem worth pursuing.

When it comes to translating knowledge accumulation into a concrete behavior (i.e., vaccination intention), we found that Democrats were most likely to increase their vaccination intention when Doctor Fauci was the source of their knowledge increase. This finding is consistent with prior work showing that credible sources have more influence on people’s beliefs [16,17,18] and on intentions such as voting [21] and purchasing behavior [22, 23]. Here, we replicate this finding in the context of vaccination. This result is informative from a policy perspective, adding to the emerging literature on how social and behavioral science can be used to inform policy responses to the COVID-19 pandemic [45,46,47]: it suggests that short-term, knowledge-based interventions to shift behavioral intentions have limited efficacy, even when based on ideologically congruent sources. For Republicans, other types of interventions (e.g., non-knowledge-based) would need to be created and tested.

The present work also has meaningful implications for the field of data science and analytics in the context of COVID-19, complementing prior work [48] with experimental approaches. First, it quantifies the construct of belief and provides empirical evidence for a mechanism by which COVID-19-related belief change can be triggered. Second, the experiments reported here employ robust statistical modeling to test the relative effectiveness of various information sources at changing beliefs and to disentangle ideological influences on such effects. And third, this work incorporates predictive models of behavioral intentions (i.e., vaccination intention) across the sociopolitical spectrum.

A meaningful expansion of this work could involve investigating how information from different sources propagates through social networks [49]. Once these sources communicate information in real-world circumstances, people often communicate with one another and share this information [50, 51]. Thus, it would be important to understand how these conversations amplify the impact of the source, especially in homogeneous communities characterized by homophily. Critically, individual-level effects have been found to propagate in social networks [52, 53], and social networks can amplify the spread of behaviors that are both harmful and beneficial during an epidemic [54]. Accordingly, tracking COVID-19 information propagation in fully mapped social networks would be critically important, especially given policymakers’ interest in impacting community-wide knowledge and behavior [55].

Finally, these findings might prove useful in the battle against misinformation, a prominent threat facing the world today [56]. Emerging research is using social science to understand and counter the spread of false information [57], which has been found to propagate faster and further than true information [58, 59]. One approach is refutation, or debunking [60, 61], which has been found to backfire in some contexts [62, 63]. Another approach is prebunking, or inoculation [64]: preemptively exposing people to small doses of misinformation techniques (including scenarios about COVID-19) reduces susceptibility to fake news [65, 66]. A third approach is nudging accuracy, which has been found to reduce belief in false news [67]. Here, we show that generic normativity cues are most successful at increasing knowledge across the ideological spectrum, and that expert communications are most successful at increasing Democrats’ vaccination intentions.

5 Materials and methods

5.1 Open science practices

The materials and data can be found on our open science framework page: https://osf.io/zcp3m.

The pre-registrations can be found here:

Study 1: https://aspredicted.org/blind.php?x=wg3aa5.

Study 2: https://aspredicted.org/kb4bc.pdf.

The data analysis (in Python) can be accessed as a Jupyter notebook on GitHub: https://github.com/mvlasceanu/COVIDsource.

5.2 Participants

In Study 1, we aimed for a US census-matched sample of 1000 participants, half Democrats and half Republicans. This sample size was calculated based on a power analysis assuming an effect size of 0.4, a significance level of 0.05, and 80% power for each of the independent-samples comparisons between the Control and Experimental conditions. Using the Cloud Research platform, we recruited a US census-matched sample of 1387 Americans, expecting, based on prior studies, to exclude about 25% of them based on pre-registered criteria (i.e., failed attention checks). Indeed, 327 participants failed our attention checks. We conducted statistical analyses on the final US census-matched sample of 1060 participants (57% female; Mage = 48.30, SDage = 16.89). This sample matched the US census quotas for age, gender, race, and ethnicity. The total sample contained 544 participants who self-identified as Democrats and 516 who self-identified as Republicans.

In Study 2, to increase the power to detect potential ideological differences in the effect observed in Study 1, we calculated the sample size based on a power analysis for each of the independent-samples comparisons between Democratic and Republican participants in each condition, again assuming an effect size of 0.4, a significance level of 0.05, and 80% power. Thus, we aimed for a sample of 2000 participants, which is the sample size we pre-registered. Using the Cloud Research platform, we recruited a sample of 2075 Americans. Of these, 199 were excluded based on pre-registered criteria (i.e., failed attention checks). We conducted statistical analyses on the final sample of 1876 participants (61% female; Mage = 49.24, SDage = 18.02). The total sample contained 911 participants who self-identified as Democrats and 965 who self-identified as Republicans.
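As an illustration, the per-group sample size implied by these power-analysis parameters can be reproduced with statsmodels; this is a sketch, not necessarily the exact software used for the reported calculation.

from statsmodels.stats.power import TTestIndPower

# Per-group n needed to detect d = 0.4 at alpha = 0.05 with 80% power in an
# independent-samples t test (two-sided).
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(round(n_per_group))  # approximately 100 participants per group

With roughly 100 participants per cell, this implies about 1000 participants across the 10 conditions in Study 1, and about 2000 when each condition is further split by participant ideology in Study 2, matching the targets described above.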

The study protocol was approved by the Princeton University Institutional Review Board, and participants gave informed consent to participate in the study.

5.3 Stimulus materials

For Study 1, we undertook preliminary studies to develop a set of eight statements regarding COVID-19. A pilot study was conducted on a separate sample of 269 Cloud Research participants (Mage = 40.63, SDage = 15.49; 66% women) to select these statements from a larger initial set of 37 statements. For each statement, we collected believability ratings (i.e., “How accurate or inaccurate do you think this statement is,” on a scale from 0-Extremely Inaccurate to 100-Extremely Accurate). The eight statements we selected (e.g., “The sudden loss of smell or taste is a symptom of being infected with COVID-19.”) were on average moderately endorsed (M = 53.03, SD = 21.57, on a 0 to 100-point scale), as we chose them to avoid ceiling and floor effects. Four of them were scientifically accurate (MAccurateBeliefs = 71.1, SD = 29.8) and four were conspiracies (MConspiracyBeliefs = 34.9, SD = 34.4), as concluded by published scientific papers and/or by the Centers for Disease Control at the time of data collection.

For Study 2, we developed a set of eight statements regarding COVID-19 vaccines. Four of them were scientifically accurate and four were inaccurate, as concluded by published scientific papers and/or by the Centers for Disease Control, at the time of data collection.

5.4 Design and procedure

The data for Study 1 were collected between May 26, 2020 and June 4, 2020, and the data for Study 2 were collected between February 1st and February 2nd, 2021. Participants went through three experimental phases. They were told that they would participate in an experiment about people’s evaluation of information and were directed to the survey on the Qualtrics platform. After completing the informed consent form, participants were directed to the first phase (pretest), in which they rated a set of eight statements (one per page) by indicating the degree to which they believed each statement (i.e., “How accurate do you think this statement is,” from 0-Extremely inaccurate to 100-Extremely accurate).

Then, in the second phase, participants were randomly assigned to one of 10 between-subjects conditions. In each condition, participants were told that the source of half of the initially rated statements (i.e., the target items) was one of the following: a political leader (President Trump or President Biden), a health expert (Doctor Fauci or the CDC), an anecdote (of a Democrat or a Republican), or a group of prior participants (either Democrats, Republicans, or Generic non-ideological). Importantly, the source always endorsed the accurate information it mentioned (e.g., “This statement was part of a speech by President Trump”; 2 target items, counterbalanced with baseline items) and denied the conspiracies it mentioned (e.g., “This statement was refuted in a speech by President Trump”; 2 target items, counterbalanced with baseline items). Note that, in the normative conditions (i.e., those involving supposed groups of prior participants), participants were instead told that they would see the average accuracy rating assigned to half of the initial statements (i.e., the target items) by prior participants, whose political ideology was described as Republican (i.e., “You will now see the average accuracy assigned to some of these statements by the Republican participants who took this survey last week”; Normative Republican), as Democratic (i.e., “You will now see the average accuracy assigned to some of these statements by the Democratic participants who took this survey last week”; Normative Democratic), or was left unspecified (i.e., “You will now see the average accuracy assigned to some of these statements by the participants who took this survey last week”; Generic-Normative). Importantly, in each of these three conditions, the average ratings presented to participants for the target items were very high for accurate information (e.g., “95%”, “98%”) and very low for conspiracies (e.g., “5%”, “2%”). The non-target items, consisting of the other half of the items presented at pretest, served as baseline items. We note that the eight initial items were pseudo-randomly assigned to either target or baseline status across participants, such that the source supported two accurate beliefs and opposed two conspiracy beliefs. In the Control Condition, participants were not presented with any information at all; they only completed the pretest and posttest.

In the third phase (posttest), participants again rated the believability of the initial eight statements, after which they completed a series of demographic questions and were debriefed. In the debriefing, participants were told which of the statements were actually accurate and which were inaccurate. They were also informed that the sources had been assigned to the information for the purposes of the experiment.
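For concreteness, the pseudo-random assignment of items to target versus baseline status described above could be implemented along these lines; this is a sketch with hypothetical item labels, not our actual survey logic.

import random

# Sketch of the per-participant item assignment: of four accurate and four
# conspiracy statements, two of each type are designated as target items
# (attributed to the source) and the remaining four serve as baseline items.
# Item labels are hypothetical.
ACCURATE = ["A1", "A2", "A3", "A4"]
CONSPIRACY = ["C1", "C2", "C3", "C4"]

def assign_items(rng=random):
    target = rng.sample(ACCURATE, 2) + rng.sample(CONSPIRACY, 2)
    baseline = [item for item in ACCURATE + CONSPIRACY if item not in target]
    return target, baseline

target_items, baseline_items = assign_items()
print("Target:", target_items, "Baseline:", baseline_items)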

The design and procedure in Study 2 were the same as in Study 1, with one exception: at the end of the study, we asked about participants’ intention to get vaccinated against COVID-19.

5.5 Measures

Statement endorsement was measured at pretest and posttest with the question “How accurate or inaccurate do you think this statement is”, on a scale from 0-Extremely Inaccurate to 100-Extremely Accurate. We asked participants to indicate their age, gender, education, and political orientation.

In Study 2, we added a measure of intention to get vaccinated against COVID-19. First, participants were asked if they had already been vaccinated against COVID-19. If their answer was “no,” then the follow-up question appeared on their screen: “If you were offered the CDC currently recommended COVID-19 vaccine (Moderna or Pfizer) today, would you agree to get vaccinated?” which they had to answer on a scale from 0- “Absolutely not” to 100- “Absolutely yes.”

5.6 Analysis and coding

Participants’ knowledge about COVID-19 was operationalized as the difference between their belief in the accurate information and their belief in the conspiracies (i.e., belief in accurate statements minus belief in inaccurate statements). This score was calculated separately for the target items (i.e., statements the source mentioned) and the baseline items (i.e., statements the source omitted). Knowledge change (or accumulation) was computed as knowledge at posttest minus knowledge at pretest.
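A minimal sketch of this scoring in pandas is shown below, assuming a long-format dataframe with one row per participant, statement, and phase; the file and column names are hypothetical placeholders.

import pandas as pd

# Minimal sketch, assuming a long-format dataframe with one row per
# participant x statement x phase, a boolean 'is_accurate' flag, a boolean
# 'is_target' flag, a 0-100 'rating', and a 'phase' column with values
# 'pretest'/'posttest'. File and column names are hypothetical.
ratings = pd.read_csv("ratings_long.csv")

def knowledge(group):
    # belief in accurate statements minus belief in conspiracy statements
    return (group.loc[group["is_accurate"], "rating"].mean()
            - group.loc[~group["is_accurate"], "rating"].mean())

scores = (
    ratings.groupby(["participant", "is_target", "phase"])
    .apply(knowledge)
    .unstack("phase")
)
scores["knowledge_change"] = scores["posttest"] - scores["pretest"]
print(scores.head())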