With the proliferation of social media as a primary information source, the prevalence of misinformation has surged markedly (Pennycook & Rand, 2021). Given the detrimental effects of misinformation on democratic societies (Ecker et al., 2022), particularly in political contexts where it can manipulate voting behavior (Guess et al., 2020a) and exacerbate intergroup polarization and conflict (Bail et al., 2018), there is a need to elucidate the psychological underpinnings of belief in fake news, the ability to discern truth, and the inclination to disseminate fake news. The existing literature emphasizes identity and reasoning as two critical factors in explaining why individuals are susceptible to believing fake news. Depending on which factor is prioritized, two theoretical approaches emerge, each with alternative expectations regarding the effects of reflection on fake news assessments: motivated reasoning and reflective reasoning.

Two competing approaches on the role of reflection in fake news

A common approach for explaining belief in fake news in relation to identity is the Motivated Reasoning Account (MRA). Although the MRA provides a broader framework for understanding psychological tendencies rooted in beliefs, attitudes, and values (Kahan, 2015), it serves as the foundational theoretical account in fake news research by prioritizing identity as the central motivation (Pennycook & Rand, 2021). The MRA suggests that an individual’s motivations or goals shape their reasoning and judgment processes (Kunda, 1990). Accordingly, information processing is influenced by the motivation to confirm or support pre-existing beliefs (Faragó et al., 2020; Kahan, 2015). More specifically, individuals tend to protect their identity and readily believe content that aligns with their perspectives while approaching content that contradicts their views with a high degree of skepticism (Kahan, 2013). Identity-protective cognition thus leads individuals to selectively accept evidence that confirms pre-existing beliefs while disregarding contradictory information (Kahan et al., 2007). This tendency is further amplified in contexts where individuals’ sense of identity and belonging is salient (Stets & Burke, 2000). When social identity becomes salient, the inclination toward ingroup bias intensifies, driven by the goal of protecting one’s identity. In addition, the MRA casts reflection as a factor that strengthens identity-protective tendencies: because reflection equips individuals with cognitive skills such as more effortful thinking and deliberation, it makes them more sophisticated and better able to protect their identities.
Thus, the MRA predicts that individuals who identify with a particular political affiliation will, through reflective thinking, become more attached to their beliefs, show an increased tendency to believe fake news that supports their political identity, and exhibit this tendency more strongly when their political identity is made salient.

Another theoretical approach, the Reflective Reasoning Account (RRA), posits that reflective thinking inhibits automatic heuristic responses and facilitates questioning endorsed beliefs by prompting deliberation (Evans & Stanovich, 2013; Pennycook & Rand, 2021). Accordingly, cognitive skills provided by reflection prompt individuals to adopt a critical perspective (Evans, 2008), even in their worldviews (Bago et al., 2020). Since reflection enables individuals to scrutinize the available information, it facilitates a more accurate discernment of the truth (Pennycook & Rand, 2021; Pennycook, 2023). Therefore, the RRA predicts that reflection induces cognitive decoupling, leading to enhanced discernment of truth and decreased belief in fake news, including content aligning with individuals’ political identities.

Existing findings and limitations impeding a clear conclusion on the effects of reflection

Most past studies on fake news support the idea that reflection elicits cognitive decoupling, reducing belief in fake news and enhancing truth discernment (see Pennycook for a review). However, these studies cannot support a definitive conclusion due to methodological limitations. The existing literature predominantly employs the Cognitive Reflection Test (CRT; Frederick, 2005) as a measure of reflection, drawing inferences from a negative correlation between CRT scores and belief in fake news (e.g., Pennycook & Rand, 2019a; Tandoc et al., 2021). The CRT is a problem-solving test that distinguishes between intuition-based (Type I) and reflective (Type II) reasoning through items designed to elicit heuristic errors that are avoidable with accurate reasoning (Frederick, 2005). Although the CRT serves as an indicator of reflective thinking, it does not encompass all dimensions of thinking styles (Hertzog et al., 2018), and it is ambiguous which components of reflection it captures (Newton et al., 2024). For example, “need for cognition” and “open-minded thinking” differ in that the former reflects a preference for effortful thinking, while the latter involves critically evaluating beliefs and intuitions based on evidence (Cacioppo & Petty, 1982; Stanovich & West, 2007). Both are theoretically related to reflection, but their role within CRT measurements remains unclear (Erceg & Bubić, 2017). The CRT adopts a single-dimension perspective, assuming one continuum from intuitive to reflective thinking, whereas individuals actually vary across multiple dimensions of intuitive-reflective thinking styles (Newton et al., 2024). Therefore, the CRT overlooks variations in thinking styles and potentially oversimplifies their complexity and impact (Bayrak et al., 2023; Newton et al., 2024). Moreover, pathways other than reflective thinking can produce accurate responses on the CRT.
For example, early selection processes are crucial in solving the CRT, as most correct responders begin with the correct answer or a logical line of thought (Szaszi et al., 2017). Additionally, Bago and De Neys (2019) found that individuals who correctly answered the bat-and-ball problem of the CRT after deliberation had already provided the correct answer in the initial response phase, which involved minimal deliberation. Similarly, Patel et al. (2019) showed that reflective thinking does not always ensure a correct response, even when the intuitive answer is unavailable and the correct option is presented in a multiple-choice format. Although the CRT is considered a valid measure because of its predictive power, existing studies on fake news using the CRT do not experimentally manipulate cognitive style and thus cannot show how reflection causally affects assessments of fake news.

In an experimental study, Swami et al. (2014) reported that reflection, induced via priming techniques including scrambled-sentence tasks and processing disfluency, reduced belief in conspiracy theories. Although that study did not directly focus on fake news, it strengthens the expectation that the finding extends to fake news, which shares similar psychological mechanisms with other epistemically suspect beliefs (Pennycook, 2023). However, the effectiveness of these priming techniques in inducing reflective thinking remains controversial. For example, Meyer et al. (2015) attempted to replicate the effects of processing disfluency by pooling data from various experiments and found that the task did not activate reflective thinking. Similarly, Deppe et al. (2015) used a scrambled-sentence task to prompt reflective thinking, yet observed no notable differences in cognitive reflection scores between the primed group and the control group. Furthermore, Većkalov et al. (2024) recently failed to replicate Swami et al.’s (2014) findings using the same priming manipulations. Beyond these replication challenges, the theoretical validity of priming as a mechanism for triggering reflective thinking remains a subject of ongoing debate. Priming effects involve the automatic activation of mental representations without conscious awareness, leading to faster responses to related targets and influencing judgments and behaviors (Molden, 2014). Reflective thinking, however, may not be a process that can be triggered by such a mechanism. Reflection is a broad concept encompassing a controlled, high-effort deliberation process aimed at overriding and correcting initial intuitions (Evans & Stanovich, 2013).
Such correction of initial intuitions is more plausibly activated through practice-based training, in which individuals are consciously persuaded of and encouraged toward the efficacy of reflection, than unconsciously triggered by priming.

Although numerous experiments have explored the effects of various factors on the perception of fake news, to our knowledge only one has focused on the causal effects of reflection. Bago et al. (2020) implemented a two-response paradigm to elicit reflective thinking and found that deliberation led individuals to assess fake news as less accurate. In this paradigm, participants first make decisions quickly under time pressure and then have a chance to reconsider their choices with more careful thought. The purpose is to produce one more intuitive decision and another more reflective one. However, it is unclear how effective this technique is in stimulating intuition and reflection: specifically, whether the variation between the two responses is driven by the initial time pressure, by the subsequent opportunity for deliberation, or by a combination of both. To shed light on this question, Isler and Yilmaz (2023) analyzed responses made initially under time pressure and later after a delay, comparing them to responses in control conditions without time restrictions. They also investigated whether pairing the delayed response with a decision justification technique would amplify the manipulation’s effectiveness. They found that the within-subjects brief time-delay condition of the two-response technique, intended to activate reflection in Bago et al. (2020), does not activate reflection beyond baseline levels and effectively acts as a control condition (Isler & Yilmaz, 2023). Thus, earlier findings based on this technique are questionable, making it crucial to investigate the causal relationship between cognitive reflection and belief in fake news using alternative, well-established methods for inducing reflection.

Activating reflection with debiasing training

Addressing the methodological limitations in the literature on experimentally inducing reflective thinking, Isler et al. (2020) developed a debiasing training technique based on successful previous laboratory experiments and established debiasing principles. The technique centers on training that raises awareness of three commonly observed cognitive biases: the semantic illusion, the base rate fallacy, and the availability bias (Isler et al., 2020). After the training, participants are encouraged to act with awareness of their susceptibility to these cognitive errors. The aim is to enable participants to overcome the intuitive biases underlying their judgments during the experiment and to foster more reflective decision-making. In line with this expectation, Isler et al. (2020) found that debiasing training significantly improves cognitive performance on the Cognitive Reflection Test-2 (Thomson & Oppenheimer, 2016). Furthermore, Isler and Yilmaz (2023) compared the techniques commonly used in the literature and found that debiasing training was the most effective technique for eliciting reflective thinking.

Despite its well-established effectiveness in inducing reflection, no experiment has yet applied this technique in the context of political fake news. Ideological worldviews can greatly impede debiasing efforts (Lewandowsky et al., 2012), and individuals often fail to recognize or correct their biases (Scopelliti et al., 2015). Therefore, training that raises awareness of cognitive biases and counters intuition-driven susceptibility may be particularly effective in addressing political misinformation. Based on dual-process models of reasoning, debiasing training techniques aim to teach individuals inferential rules (Lilienfeld et al., 2009). Dual-process models suggest that individuals make intuitive judgments that can later be corrected through reflective reasoning (Evans & Stanovich, 2013), and debiasing training techniques focus on modifying these intuitions by encouraging attention to overlooked information (Hirt & Markman, 1995) and promoting reflective thinking. Therefore, given its theoretical foundation and content, the debiasing training developed by Isler et al. (2020) is anticipated to be particularly effective in addressing political fake news. This potential arises from its incorporation of three key factors that can be critical in reducing belief in fake news. The first is a direct increase in deliberation resulting from the debiasing training (Isler & Yilmaz, 2023), which aligns with a key finding in the literature: the negative relationship between deliberation and belief in fake news (Pennycook & Rand, 2021). The second is that debiasing training targets the reduction of intuitive responses by enhancing individuals’ metacognitive awareness: it encourages individuals to adopt an evaluative mindset when making assessments and improves metacognitive awareness by introducing them to their own cognitive biases.
In line with this, Salovich and Rapp (2021) found that individuals’ awareness of their likelihood of being affected by inaccurate information, together with metacognitive awareness, resulted in fewer judgment errors, less acceptance of false assertions, and thus greater resilience to inaccurate information. Similarly, Salovich et al. (2022) found that acting with an evaluative mindset promotes reliance on accurate understandings and reduces the likelihood of being influenced by inaccurate statements. These findings suggest that the approach employed by debiasing training to enhance reflective thinking may increase resilience to fake news. The third is that debiasing training increases interest in truth and improves accuracy. Previous studies have shown that accuracy prompts (i.e., shifting attention to accuracy) increase the ability to detect misinformation (e.g., Pennycook & Rand, 2022; Martel et al., 2024). Therefore, the emphasis in the debiasing training on recognizing tendencies toward incorrect inferences and maintaining alertness for accuracy can be expected to reduce belief in fake news by acting as a form of accuracy prompt. Taken together, these three paths (i.e., increasing deliberation, increasing metacognitive awareness, and improving intuitions) are expected to improve individuals’ choices (Pennycook, 2023). Thus, in the current experiment, we test the effects of reflection on belief in fake news by employing debiasing training as a well-validated manipulation technique for enhancing reflective thinking.

The current study and derived hypotheses

We conducted a preregistered, high-powered experiment to address limitations in the existing literature and to determine which of the two alternative approaches (i.e., motivated reasoning or reflective reasoning) has stronger empirical support. We invited an equal number of Democrats and Republicans to participate in the experiment through pre-screening. We employed debiasing training to activate reflection and manipulated political identity saliency by priming political party affiliation. We presented participants with fake news that supported and contradicted their political identity. As outcomes, we measured accuracy ratings of fake news (i.e., the extent to which fake news is believed to be true), intention to share fake news (i.e., the extent to which participants would share fake news on their personal social media accounts), and truth discernment (i.e., the difference between accuracy ratings of true news and fake news).

Based on the theoretical expectations of the two alternative approaches (i.e., motivated reasoning and reflective reasoning), we derived six hypotheses and tested each one. First, we tested the shared expectations of both approaches in H1 and H2, predicting that individuals would be more inclined to believe fake news supporting their political identity and more likely to share this politically aligned fake news on their personal social media accounts. Next, in H3, we tested whether political identity salience would amplify biased responses in accuracy ratings and intention to share fake news, a prediction derived from the motivated reasoning approach, which posits that social identity plays a dominant and decisive role in fake news assessments. Then, in H4, we tested the opposing expectations of the two approaches regarding the effects of reflection on accuracy ratings. While the motivated reasoning approach suggests that reflection serves an identity-protective function, leading individuals to perceive fake news supporting their political identity as more accurate (H4a), the reflective reasoning approach predicts that reflection enhances overall news evaluation accuracy (H4b). Subsequently, in H5, we tested the analogous expectations of both approaches regarding the intention to share fake news. The motivated reasoning approach predicts that reflection strengthens the defense of one’s own views to protect identity, whereas the reflective reasoning approach suggests that reflection generally increases skepticism and reduces fake news sharing. Lastly, in H6, we tested the reflective reasoning prediction that reflection enhances overall accuracy by examining its effect on truth discernment scores, which indicate the ability to distinguish between fake and true news.

Thus, we tested the following preregistered hypotheses:

  • H1: Both Democrats and Republicans evaluate fake news that supports their own political identity as more accurate than fake news that is against their political identity.

  • H2: Both Democrats and Republicans indicate more intention to share fake news that supports their own political identity than fake news that is against their political identity.

  • H3: The effects identified in H1 and H2 are stronger for those whose political party affiliation becomes experimentally salient compared to the control condition.

  • H4: Two competing hypotheses regarding the role of reflection in accuracy ratings were tested against each other. H4a: Reflection will lead to higher accuracy ratings for fake news supporting one’s own political identity compared to fake news opposing it (i.e., motivated reasoning). H4b: Reflection will lead to lower accuracy ratings for fake news in general (i.e., reflective reasoning).

  • H5: The same two competing predictions were tested for the intention to share fake news. H5a: Reflection will lead to higher intention to share fake news supporting one’s own political identity compared to fake news opposing it (i.e., motivated reasoning). H5b: Reflection will lead to lower intention to share fake news in general (i.e., reflective reasoning).

  • H6: Reflection will lead to higher truth discernment scores for both Democrats and Republicans.

Method

We preregistered all materials, analyses, and hypotheses before data collection. The preregistration form can be seen at https://osf.io/6fa82/?view_only=b9a7cf7ec95b4570bd01f7fc4eb14c5d.

We executed all protocols following relevant laws and institutional protocols and received ethical approval from the Human Research Ethics Committee of Kadir Has University (Report number: 02.01.2023–50858). We obtained informed consent from all participants before the experiment.

Participants

Using G*Power software (Faul et al., 2009), assuming a two-tailed α of 0.05, a small effect size of f = 0.10, and power of 0.90, we calculated that a sample of at least 1053 would be required to detect statistically significant results. Based on pre-screening in the Prolific panel, we invited participants to our study, half of whom had previously identified themselves as Republicans and the other half as Democrats. We reached a sample of 1061 participants from the United States. In line with our preregistration, we checked pre-screening information against up-to-date answers in the demographic form and excluded participants currently identifying themselves as independent (n = 17), other (n = 2), or none (n = 2). As a result, we conducted analyses with a total sample of 1040 participants (Mage = 44.07, SD = 15.24; 509 female), consisting of 525 Democrats (50.5%) and 515 Republicans (49.5%).

Materials

Cognitive style

In the reflection condition, we asked participants to complete the debiasing training developed by Isler et al. (2020). In this training, participants were initially tasked with responding to three questions about prevalent cognitive biases: semantic illusion, base rate fallacy, and availability bias. A semantic illusion is a superficial misinterpretation of the meaning of a sentence or phrase. Questions designed to detect this bias contain a subtle error intended to provoke an intuitive yet incorrect response. In the debiasing training, an example of a semantic illusion is the well-known Moses Illusion, where participants are often misled into believing that Moses took two of each animal onto the Ark (when it was actually Noah). The base rate fallacy is a cognitive bias in which individuals ignore general statistical information (base rates) and instead focus on specific information when assessing the probability of an event, leading to erroneous conclusions. In the debiasing training, the Lawyer-Engineer problem was used as an example: Participants were asked whether Jack, who enjoys science fiction and programming, is more likely to be a lawyer or an engineer, given a described sample of 995 lawyers and only 5 engineers. The correct answer is “lawyer,” but the specific details often lead to the wrong conclusion. Availability bias occurs when individuals assess the probability of an event based on how easily examples come to mind rather than on actual statistical data. In the debiasing training, an example involves asking whether sharks or horses cause more human deaths. Although horses are responsible for more deaths, many incorrectly choose sharks due to their more memorable and fear-inducing image in media and popular culture. After each of these three questions, participants were provided with the accurate answer accompanied by explanatory content.
Subsequently, participants were asked to summarize the key insights from the training in four sentences. Following this task, as the main idea of the training, participants were reminded that individuals’ judgments are open to many biases and that it is therefore important to pause and reconsider immediate reactions before making a decision. Participants were then instructed to rely on reflection in the following stages of the study. In the control condition, participants were asked to describe an object nearby or in their possession using four sentences, as in Isler et al. (2020). This active control condition was implemented to mitigate potential influences that the writing task in the debiasing training condition might exert on cognitive performance (Isler et al., 2020; Isler & Yilmaz, 2023). The complete ready-to-use version of the debiasing training and control condition can be accessed from https://osf.io/csbe4/?view_only=b6101dd6822641e6b617bd89eb19fe1c.

Political identity

We established two distinct political identity conditions through pre-selection procedures: Democrats and Republicans. We invited an equal number of individuals previously identified as Democrats and Republicans to participate in the experiment via the Prolific online data collection panel. Prolific had previously determined the political identities of participants based on their responses to the following question: “Generally speaking, which of the following two political party identities do you feel closer to?”.

Political identity saliency

We manipulated the salience of political identity to create two distinct conditions: salient identity and control. In the salient identity condition, we presented participants with a question concerning their political identity at the beginning of the experiment. Using a forced-choice response format, they expressed their political identity by selecting an option featuring symbols associated with the political parties (i.e., donkey and elephant). Conversely, in the control condition, participants responded to this question at the end of the experiment, after the dependent variables had been measured.

News assessments

We utilized a news stimuli pool from Pennycook et al. (2021a), encompassing both political fake news and true news items. This stimulus pool was developed through a pilot study to eliminate many confounding factors and to allow future studies to select news from the pool according to their specific needs. The pilot study gathered a large pool of fake and true news stories from fact-checking platforms (e.g., snopes.com, factcheck.org) and mainstream media sources (e.g., The New York Times and The Washington Post). Then, participants quota-matched to the national demographic characteristics of the U.S. were invited to take part in the research. They were asked to assess the political alignment of the news items and to rate various potentially confounding factors, including likelihood, sensationalism, informativeness, surprise, impact, familiarity, and partisanship. Finally, participants were asked about their own political views (e.g., political party affiliation, ideological orientation). As a result, each news item was categorized according to the political view it supported, and an index was created to form balanced news sets. This index was based on the differences between the scales’ midpoints and the sample’s baseline scores across the confounding factors. In the current experiment, we formed news groups that were half pro-Democrat and half pro-Republican. Because news content in one group could exhibit extreme left-wing partisanship while content in the other manifested an extreme right-wing stance (and vice versa), we formed news categories with balanced partisanship scores for each political group. Both groups of true news used in our study had an average balance score of 0.753. Similarly, the partisanship scores of the fake news groups were balanced with each other, with an average score of 0.731.
The document detailing the baseline scores of the news items across all factors can be found here: https://osf.io/csbe4/?view_only=b6101dd6822641e6b617bd89eb19fe1c. We presented participants with 20 news headlines; 10 were true, and 10 were fake. Within each type, five stories were pro-Democrat and five were pro-Republican. Since Democrats and Republicans differ on various political topics, the news items covered a diversity of themes to capture this variation and account for the possibility that individuals within each political group may prioritize different issues. The headlines were presented in a picture format, with a headline, byline, and source, and were displayed in random order (see Table 1).

Table 1 Contents of headlines in news stimuli

Participants were tasked with responding to the following questions for each news item:

  1) “To the best of your knowledge, is the claim in the above headline accurate?” (Response options: 1 = Not at all accurate; 2 = Not very accurate; 3 = Somewhat accurate; 4 = Very accurate).

  2) “Would you consider sharing this headline on social media (for example, through Facebook or Twitter)?” (Response options: 1 = I definitely would not share; 2 = I would not share; 3 = I would share; 4 = I definitely would share).

As a result, three different scores were generated: accuracy ratings, intention to share news, and truth discernment. We computed truth discernment scores by subtracting the average accuracy ratings of fake news from the average accuracy ratings of true news, as suggested by Pennycook et al. (2021a).
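This scoring can be illustrated with a minimal Python sketch; the ratings below are hypothetical values on the study's 1-4 accuracy scale, not actual data:

```python
from statistics import mean

# Hypothetical accuracy ratings (1 = Not at all accurate ... 4 = Very accurate)
true_news_ratings = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]  # 10 true headlines
fake_news_ratings = [2, 1, 2, 2, 1, 3, 2, 1, 2, 2]  # 10 fake headlines

# Truth discernment = mean accuracy rating of true news minus that of fake news
# (Pennycook et al., 2021a); higher scores indicate better discernment.
discernment = mean(true_news_ratings) - mean(fake_news_ratings)  # 3.2 - 1.8 = 1.4
```

A participant who rated every headline identically, regardless of veracity, would score zero on this measure.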

Demographic form

We used a demographic form asking participants about their age, sex (response options: Female, Male, Non-binary/Third Gender, Prefer not to say), education level, social media usage (i.e., whether they have social media accounts), socioeconomic status, social, economic and general ideology questions (1 = Very liberal, 7 = Very conservative), and religiosity (1 = Not religious at all, 7 = Very religious).

Results

We conducted a series of mixed-design ANOVAs to test the main and interaction effects of cognitive style, political identity, and political identity saliency on the dependent measures. We also added partisanship of news (i.e., pro-democrat or pro-republican) to the model as a within-subjects factor. Thus, the general design of our experiment was a 2 (Political Identity: Democrat or Republican) x 2 (Political Identity Saliency: Salient or Control) x 2 (Cognitive Style: Reflection or Control) x 2 (Partisanship of Fake News: Pro-democrat or Pro-republican) mixed-design factorial ANOVA on news assessments, where the last factor was within-subjects. For each preregistered hypothesis, we report the analysis results with the relevant measures. The descriptive statistics for the dependent variables, categorized by experiment condition, are presented in Table 2.
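The 2 x 2 x 2 x 2 factorial structure can be made concrete with a short sketch (the variable names are ours, for illustration only):

```python
from itertools import product

# Between-subjects factors
political_identity = ["Democrat", "Republican"]
identity_saliency = ["Salient", "Control"]
cognitive_style = ["Reflection", "Control"]
# Within-subjects factor: every participant rates both news types
news_partisanship = ["Pro-democrat", "Pro-republican"]

# Each participant falls into one of 8 between-subjects cells...
between_cells = list(product(political_identity, identity_saliency, cognitive_style))
# ...and contributes observations at both levels of the within-subjects factor,
# yielding 16 design cells in total.
design_cells = list(product(between_cells, news_partisanship))
```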

Table 2 Descriptives for dependent variables categorized based on the experiment conditions

We used CRT scores as a manipulation check for the cognitive style manipulation. The results revealed a significant difference in manipulation check scores between the control (M = 1.64, SD = 1.26) and reflection (M = 1.82, SD = 1.20) conditions, t(1038) = −2.33, p = .020, indicating that the cognitive style manipulation was effective. We also used an ingroup identification measure to check the political identity saliency manipulation. However, there was no statistically significant difference in manipulation check scores between the control (M = 5.33, SD = 1.26) and salient identity (M = 5.28, SD = 1.28) conditions, t(1038) = 0.705, p = .481, suggesting that the political identity saliency manipulation was not effective.
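As a sanity check, the first t statistic can be approximately recovered from the reported summary statistics alone. The sketch below assumes two groups of about 520 participants each (the reported df of 1038 implies n1 + n2 = 1040, but the exact split is not given in the text):

```python
from math import sqrt

# Manipulation-check CRT means and SDs reported in the text; ns are assumed.
m_control, sd_control, n_control = 1.64, 1.26, 520
m_reflect, sd_reflect, n_reflect = 1.82, 1.20, 520

# Pooled-variance (Student's) independent-samples t-test from summary statistics
df = n_control + n_reflect - 2  # = 1038, matching the reported df
pooled_var = ((n_control - 1) * sd_control**2 +
              (n_reflect - 1) * sd_reflect**2) / df
se = sqrt(pooled_var * (1 / n_control + 1 / n_reflect))
t = (m_control - m_reflect) / se  # roughly -2.36, close to the reported -2.33
```

The small discrepancy from the reported value is expected, since the true group sizes were presumably not exactly equal.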

To test H1, we examined whether both political identity groups evaluate fake news supporting their political identity as more accurate than news contradicting their political identity. We conducted a 2 (Political Identity: Democrat or Republican) x 2 (Partisanship of Fake News: Pro-democrat and Pro-republican) mixed-design ANOVA, where the latter factor was within-subjects on accuracy ratings. There was a significant interaction between political identity and partisanship of fake news, F(1, 1038) = 224.3, p <.001, η2p = 0.18 (see Fig. 1).

Fig. 1 The interaction effect of political identity and partisanship of news on accuracy ratings. Note. The Y-axis, representing accuracy ratings, is truncated and marked in 0.2 increments

Post hoc comparisons using the Bonferroni test showed that Democrats rated pro-democratic fake news as more accurate than pro-republican fake news (Mdifference = 0.12, SE = 0.02, t = 5.26, pbonferroni < 0.001). Similarly, Republicans rated pro-republican fake news as more accurate than pro-democratic fake news (Mdifference = 0.36, SE = 0.02, t = 15.87, pbonferroni < 0.001). These findings suggest that participants of each political identity evaluate fake news favorable to their identity as more accurate (see Table 3). Thus, H1 was supported.
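As background on the correction used in these comparisons, the Bonferroni adjustment simply scales each raw p-value by the number of comparisons, capping at 1. A minimal sketch with illustrative p-values (not the study's):

```python
def bonferroni(p_values):
    """Multiply each raw p-value by the number of comparisons, capped at 1."""
    k = len(p_values)
    return [min(p * k, 1.0) for p in p_values]

# Two hypothetical raw p-values from pairwise comparisons
adjusted = bonferroni([0.005, 0.40])  # -> [0.01, 0.8]
```

This is why the reported pairwise p-values carry the "bonferroni" subscript: each has already been multiplied by the number of comparisons in the family.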

Table 3 Post hoc comparisons - partisanship of fake news ✻ political identity on accuracy ratings of fake news

To test H2, we examined whether both political identity groups display more intention to share fake news that supports their political identity than news that contradicts their political identity. We conducted a 2 (Political Identity: Democrat or Republican) x 2 (Partisanship of Fake News: Pro-democrat and Pro-republican) mixed-design ANOVA, where the latter factor was within-subjects, on intention to share fake news. There was a significant interaction between political identity and partisanship of fake news, F(1, 1038) = 161.4, p <.001, η2p = 0.14 (see Fig. 2).

Fig. 2

The interaction effect of political identity and partisanship of news on intention to share fake news. Note. The score indicators on the Y-axis, representing intention to share levels, were truncated in 0.1 increments

Post hoc comparisons using the Bonferroni test showed that Democrats had a higher intention to share fake pro-democratic news than fake pro-republican news (Mdifference = 0.06, SE = 0.02, t = 3.16, pbonferroni = 0.010). Similarly, Republicans also had a higher intention to share fake pro-republican news than fake pro-democratic news (Mdifference = 0.26, SE = 0.02, t = 14.75, pbonferroni < 0.001). These findings suggest that both Democrats and Republicans display more intention to share fake news that supports their own political identity than fake news that is against their political identity (see Table 4). Thus, H2 was supported.

Table 4 Post hoc comparisons - partisanship of fake news ✻ political identity on intention to share fake news

To test H3, we examined whether the effects identified in H1 and H2 were stronger for those whose political identity was made salient than for those in the control condition. We conducted two 2 (Political Identity: Democrat or Republican) x 2 (Political Identity Saliency: Salient or Control) x 2 (Partisanship of Fake News: Pro-democrat and Pro-republican) mixed-design ANOVAs, where the latter factor was within-subjects, on accuracy ratings and intention to share fake news. However, there was no significant main effect of political identity saliency on either accuracy ratings, F(1, 1036) = 0.00, p =.988, η2p < 0.001, or intention to share fake news, F(1, 1036) = 0.53, p =.467, η2p = 0.001. In addition, there were no significant interaction effects between political identity saliency, political identity, and partisanship of fake news on either accuracy ratings (F(1, 1036) = 0.80, p =.373, η2p = 0.001) or intention to share (F(1, 1036) = 0.31, p =.575, η2p < 0.001). Thus, H3 did not receive empirical support.

To test H4, we examined whether reflection increases bias (i.e., the expectation of H4a, motivated reasoning) or accuracy (i.e., the expectation of H4b, cognitive decoupling) in accuracy ratings. We tested whether the biased accuracy ratings of political groups identified in H1 differ between the reflection and control conditions. We conducted a 2 (Political Identity: Democrat or Republican) x 2 (Cognitive Style: Reflection or Control) x 2 (Partisanship of Fake News: Pro-democrat and Pro-republican) mixed-design ANOVA, where the latter factor was within-subjects, on accuracy ratings. There was no statistically significant main effect of cognitive style (F(1, 1036) = 1.14, p =.285, η2p = 0.001) and no interaction effect between cognitive style and political identity (F(1, 1036) = 1.71, p =.191, η2p = 0.002). In addition, there was no statistically significant interaction effect between cognitive style, political identity, and partisanship of fake news, F(1, 1036) = 3.02, p =.082, η2p = 0.003. We also tested whether the biased accuracy ratings found in H1 change significantly with reflection, but the results showed no significant effect for either Democrats (F(1, 523) = 0.25, p =.617, η2p < 0.001) or Republicans (F(1, 513) = 3.33, p =.069, η2p = 0.006). Thus, H4 did not receive empirical support.

To test H5, we examined the same competing hypotheses (i.e., cognitive decoupling and motivated reasoning) on the intention to share fake news. We tested whether the biased intention to share fake news of political groups identified in H2 differs between the reflection and control conditions. We conducted a 2 (Political Identity: Democrat or Republican) x 2 (Cognitive Style: Reflection or Control) x 2 (Partisanship of Fake News: Pro-democrat and Pro-republican) mixed-design ANOVA, where the latter factor was within-subjects, on intention to share fake news scores. Cognitive style did not show a statistically significant main effect, F(1, 1036) = 2.51, p =.114, η2p = 0.002. There was also no statistically significant interaction effect between cognitive style, political identity, and partisanship of fake news on intention to share fake news, F(1, 1036) = 0.68, p =.410, η2p = 0.001. However, there was a significant interaction effect of cognitive style and political identity on intention to share fake news, F(1, 1036) = 5.19, p =.023, η2p = 0.005 (see Fig. 3).

Fig. 3

The interaction effect of cognitive style and political identity on intention to share fake news. Note. The score indicators on the Y-axis, representing intention to share levels, were truncated in 0.1 increments

Post hoc comparisons using the Bonferroni test showed that Democrats in the reflection condition displayed less intention to share fake news compared to the control condition (Mdifference = − 0.10, SE = 0.04, t = −2.74, pbonferroni = 0.037). However, there was no significant difference in Republicans' intention to share fake news between the reflection and control conditions (Mdifference = − 0.02, SE = 0.04, t = − 0.49, pbonferroni = 1.000). Additionally, in the reflection condition, Democrats exhibited lower intention to share fake news than Republicans (Mdifference = − 0.18, SE = 0.04, t = −4.58, pbonferroni < 0.001). However, in the control condition, there was no significant difference in intention to share fake news between Democrats and Republicans (Mdifference = − 0.05, SE = 0.04, t = −1.49, pbonferroni = 0.818). Thus, H5 received partial empirical support, as reflection reduced sharing of fake news among Democrats but not among Republicans.

To test H6, we examined the effects of reflection on truth discernment scores, which indicate the difference between the average accuracy ratings of true news and fake news. We conducted a 2 (Political Identity: Democrat or Republican) x 2 (Cognitive Style: Reflection or Control) between-subjects ANOVA on truth discernment scores. Cognitive style did not show a statistically significant main effect, F(1, 1036) = 0.46, p =.500, η2p < 0.001. There was a statistically significant main effect of political identity, F(1, 1036) = 64.481, p <.001, η2p = 0.059. Democrats exhibited higher truth discernment scores than Republicans (Mdifference = 0.32, SE = 0.04, t = 8.03, pbonferroni < 0.001). However, there was no significant interaction effect between political identity and cognitive style; the difference in truth discernment scores between political groups did not vary between the reflection and control conditions, F(1, 1036) = 0.57, p =.452, η2p = 0.001. Thus, H6 did not receive empirical support.
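To make the truth discernment measure concrete, a per-participant score can be computed as the mean accuracy rating of true news minus the mean accuracy rating of fake news, so that higher scores indicate better discernment. A minimal sketch, using hypothetical ratings rather than the study's data:

```python
# Truth discernment score for one hypothetical participant:
# mean accuracy rating of true news minus mean accuracy rating of fake news.
# Ratings are illustrative values on, e.g., a 1-4 accuracy scale.
import numpy as np

true_ratings = np.array([3.2, 2.8, 3.5, 3.0])  # per-item ratings, true news
fake_ratings = np.array([1.5, 2.0, 1.8, 1.7])  # per-item ratings, fake news

discernment = true_ratings.mean() - fake_ratings.mean()
print(round(float(discernment), 3))
```

A score near zero would indicate that a participant rates fake and true news as equally accurate; a large positive score reflects good discernment.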

Discussion

The present study demonstrates that both Democrats and Republicans are inclined to believe politically aligned fake news and exhibit a heightened intention to share such misinformation on social media. However, there was a discrepancy in the ability of the two political groups to discern between true and fake news, with Democrats demonstrating better performance in truth discernment. Additionally, although reflection did not notably impact accuracy ratings or truth discernment, it revealed varying effects on the political groups' general inclinations to share fake news: while reflection led to a decrease in Democrats' intention to share fake news, it did not yield a similar outcome among Republicans. We tested several hypotheses derived from the expectations of two different theoretical approaches (i.e., motivated vs. reflective reasoning). The results showed that H1 and H2, which predicted that social identity leads individuals to believe and share fake news supporting their own identity when assessing political fake news, were empirically supported. On the other hand, H3, which predicted that political identity salience would increase the level of bias identified in H1 and H2, was not supported by the data. In H4, we tested conflicting predictions of the two alternative theoretical approaches regarding the effects of reflection. While the motivated reasoning approach expected that reflection would function in an identity-protective role, resulting in individuals being more likely to believe fake news that supports their own views (H4a), the reflective reasoning approach expected that reflection would increase overall accuracy (H4b). Since reflection had no significant effect on accuracy ratings, neither of the two expectations under H4 received empirical support.
However, we found partial support for the reflective reasoning approach in the test of H5, which addresses the expectations of the two theoretical approaches regarding the effects of reflection on fake news sharing: reflection reduced sharing of fake news among Democrats but not among Republicans. Finally, in H6, we tested the effects of reflection on truth discernment. Since there were no statistically significant effects of reflection on truth discernment scores, H6 was not supported. The present study highlights that the causal association between reflection and assessments of fake news is not as straightforward as previously assumed, suggesting the need for a more comprehensive examination of the phenomenon.

Theoretical implications

Our findings align with the patterns identified in previous literature regarding the similarities and differences between Democrats and Republicans on the assessments of fake news (e.g., Pennycook & Rand, 2019a). Consistent with previous studies (see Baptista & Gradim, 2022 for a review), we found that both political groups tend to believe and share fake news aligning with their political views. This finding underscores the essential role of political identity in shaping individuals’ fake news beliefs and sharing behaviors. We also found that Democrats can distinguish between true and fake news better than Republicans. This difference is also consistent with previous studies (e.g., Dobbs et al., 2023; Pennycook & Rand, 2019a). It may originate from the fact that Democrats typically demonstrate greater reflective thinking than Republicans (e.g., Pennycook & Rand, 2019b). Thus, Democrats may possess a better ability to discern between fake and true news owing to their higher reflective thinking levels.

The current study presents an original contribution to the literature by testing, for the first time, the causal effects of reflection on belief in fake news, employing a reliable reflection manipulation technique—debiasing training—which is the most effective technique identified thus far that directly activates reflection (Isler & Yilmaz, 2023). However, using this empirically established technique, we were unable to replicate the previous findings on belief in fake news, which relied on questionable techniques such as the two-response paradigm (e.g., Bago et al., 2020). Similarly, we failed to extend the mitigating effect of reflection, found in studies based on weak manipulation methods such as priming on epistemically suspect beliefs like conspiracy theories (e.g., Swami et al. 2014), to belief in fake news, which shares a similar psychological background. Hence, the present high-powered experiment suggests that the expectations of RRA lack robust empirical support.

One rationale could be that reflective thinking may not directly influence belief in fake news or truth discernment skills, as implied in previous studies that are primarily based on correlational findings. Most previous studies have inferred reflective thinking indirectly through CRT measurements, generalizing the correlation between high CRT scores and low fake news beliefs as evidence of a causal effect of reflection. However, the CRT does not directly equate to reflective thinking, and because cognitive style was not experimentally manipulated in these studies, causal inferences cannot be drawn. Therefore, rather than a straightforward linear causal relationship, a broader spectrum of cognitive style differences—including analytical thinking within a more intricate network of relationships—may better explain belief in fake news. In a recent large representative sample, Čavojová et al. (2024) found no significant predictive role of reflective thinking on belief in fake news or willingness to share it. Studies have also shown that some factors related to reflective thinking are associated with belief in fake news, such as critical thinking (Lutzke et al., 2019), actively open-minded thinking (Saltor et al., 2023), and need for cognition (Faragó et al., 2024). Thus, reflective thinking may influence the ability to differentiate between fake and true news through the mediating role of these related factors. Future research could explore this possibility, particularly by employing mediator model analyses within an experimental framework (see Pirlott & MacKinnon, 2016 for a practical guide to manipulation-of-mediator designs).

Another possible explanation could be that the specific cognitive biases addressed in the debiasing training may not fully capture the range of biases influencing fake news beliefs. A more comprehensive training that includes a broader set of cognitive biases might, therefore, have a greater impact on reducing belief in fake news. Debiasing training targets common cognitive biases such as the semantic illusion, base rate fallacy, and availability bias. However, the cognitive biases that arise from intuitive thinking are varied, and other types of biases might be more closely linked to the ability to distinguish between fake and true news. For example, the false consensus effect is the tendency of individuals to overestimate the extent to which others share their beliefs, attitudes, and behaviors, leading them to assume their views are more common than they actually are. Similarly, blind-spot bias refers to the tendency to recognize cognitive biases in others while failing to see those same biases in oneself. The third-person effect is another bias, whereby individuals believe that others are more influenced by media messages than they are themselves, underestimating their own vulnerability to media persuasion, propaganda, or misinformation. Given that individuals often consider the outgroup when evaluating fake news and tend to act with overconfidence, inoculating them against these cognitive errors could enhance their truth discernment skills. Consequently, future research could explore how interventions that boost reflective thinking through training on various types of cognitive bias might influence levels of belief in fake news.

An alternative approach to improving accuracy in assessing fake news could involve practices specifically targeting misinformation rather than solely focusing on reflection-enhancing training. Debiasing training helps prevent cognitive biases by encouraging individuals to use reflective thinking rather than relying on intuition in their decision-making processes. As a result, it indirectly aims to improve accuracy across various contexts, including conspiracy beliefs, pseudoscientific beliefs, and fake news. Therefore, a limitation of debiasing training is that it does not include contextualized material specific to fake news, which may explain why its main effects were not significant in this study. Digital literacy interventions, by contrast, may be more effective in enhancing the ability to distinguish between fake and true news by providing individuals with targeted applications specific to the context of fake news. These interventions can be divided into three categories: nudges, boosts, and refutation (Alon et al., 2024). For example, a nudging intervention that uses accuracy prompts on social media to motivate individuals to pay attention to the news they encounter reduces the sharing of fake news (e.g., Pennycook et al., 2021a, b). In another approach, boosting, individuals are equipped with the skills to identify fake information (e.g., recognizing the news source, professionalism, and political motivation) and become more competent at recognizing misinformation, which has been reported to reduce belief in and intention to share fake news (Guess et al., 2020b; Lutzke et al., 2019). The third method, refutation interventions, aims to correct false beliefs by delivering fact-based information and clarifying why misinformation is misleading, with a focus on calibrating beliefs. This type of intervention has also been shown to diminish the persuasiveness of misinformation by warning about its presence and correcting specific false claims (e.g., Roozenbeek et al., 2022).

On the other hand, we found that reflection reduced the intention to share fake news for Democrats, whereas it did not have the same effect for Republicans. This difference may be attributed to the distinct epistemic norms political groups hold. While liberals emphasize reasoning more within their belief systems, conservatives lean more toward intuition and authority (Baron, 2020; Metz et al., 2018). Therefore, while reflection may be functional for Democrats, as it aligns with their epistemic norms, it may not significantly affect Republicans. Previous studies reinforce this explanation in related contexts. For instance, Yilmaz and Isler (2019) found that reflection altered the religious views of non-believers but not believers. Similarly, Pennycook et al. (2020) found that actively open-minded thinking, a key component of reflection, was more predictive for liberals than conservatives regarding epistemically suspect beliefs. Future research should focus on delineating the specific epistemic norms that distinguish Democrats and Republicans, aiming to elucidate the nature of the causal association between reflection and belief in fake news.

One possible explanation for the varying effect of reflection on belief in fake news and intention to share fake news could be that different processes may drive these outcomes. Individuals may share fake news even when they recognize its lack of truthfulness (Pennycook et al., 2021b). This could be due to the fast nature of social media, which emphasizes quick actions like sharing and liking (Kozyreva et al., 2020), potentially overriding concerns about accuracy (Pennycook & Rand, 2021). Thus, reflection might prove more effective in preventing these accuracy-undermining social media processes. In addition, possible differences between Democrats and Republicans could be evaluated through their levels of trust in institutions and the media, especially in the context of fake news. Since our study does not directly measure differences in reflection levels between political groups as an individual difference factor, nor does it include a direct measure of trust in institutions, any interpretation of this issue would be speculative. Future research should incorporate representative samples and assess whether these individual difference variables can explain the variations in reflection between Democrats and Republicans. Additionally, examining how institutional trust might influence the effects of reflection across political groups, especially in the context of misinformation, could offer valuable insights. We propose that future studies aim to isolate the effects of reflection from these potential confounding factors to provide a more comprehensive understanding of the interplay between reflection and belief in fake news.

It is worth highlighting that although the descriptive statistics reveal generally low mean scores across experimental conditions, with median and mode values reflecting the lower end of the scale, the validity of our group comparisons remains robust. Similar floor effects across all groups suggest that these conditions did not introduce systematic bias. This consistency across conditions ensures that the observed significant differences between groups are reliable despite the overall low scores. The uniformity in floor effects indicates that participants were generally cautious about believing and sharing fake news, which aligns with existing research on skepticism toward social media content (e.g., Čavojová et al., 2024; Pennycook, 2023). Our findings support the notion that while skepticism is prevalent, its impact on discerning fake from true information can vary.

Practical implications

The findings of our study also offer insights into the psychological factors that should be targeted in intervention strategies designed to reduce misinformation within the community. In the literature, debunking is a widely used method designed to protect individuals from the harmful effects of fake news and reduce belief in false information. This method seeks to decrease belief in fake news by providing individuals with evidence-based information about the false content they have previously encountered. However, debunking methods appear inadequate, as they rely on human fact-checking, which struggles to keep up with the widespread nature of fake news, and they are often ineffective at reducing belief in false information, sometimes even leading to backfire effects. Pre-bunking, as another method, seeks to train individuals against fake news by exposing them to misinformation before they encounter it. Studies on developing individuals' media literacy and inoculation against fake news suggest that pre-bunking methods can be effective. For instance, Qian et al. (2023) found that enhancing media literacy encourages individuals to utilize reverse searches (see also Hameleers, 2023). In a study comparing the effects of media literacy and psychological inoculation methods, Kuru (2024) found that both interventions effectively protect against the impact of misinformation, with inoculation proving more effective than the literacy intervention. Although reflection, activated through debiasing training, showed significant effects only among Democrats in our study, it remains a promising factor that could potentially be utilized in inoculation interventions to raise awareness of cognitive biases. Future studies should focus on exploring the potential effects of context-specific debiasing training methods that activate reflection within pre-bunking interventions.

Limitations and recommendations for future studies

The current study also has some limitations. The identity saliency manipulation yielded no significant effects compared to the control condition. Although priming-based techniques such as this manipulation are known to produce weak effects (Sherman & Rivers, 2021), we used this technique for two reasons. First, we selected a widely used technique from the literature because our study focused on reflection, and testing a newly developed identity salience manipulation was beyond its primary scope. For this reason, we incorporated a manipulation method frequently used in studies based on the social identity approach (Diamond, 2020) into our research design. Second, the overall length of the current experiment exceeds the typical duration of most studies, so we opted for a technique that participants could complete in a short amount of time. Our cognitive style manipulation required training materials with exercises, making it more time-consuming and demanding than traditional methods. In addition, participants evaluated 20 news items—both true and fake, and pro-Republican and pro-Democrat—across two questions after the manipulations, extending the overall response time. We chose a less time-consuming method for identity salience to reduce the risk of decay effects from a prolonged study duration with multiple manipulations. This approach allowed us to focus more on the reflection manipulation, a crucial and underexplored gap in the literature. Although this manipulation revealed significant effects in previous studies on different topics (e.g., Morris et al., 2008), a study using a similar priming technique found no significant effect of manipulating identity salience on belief in fake news (Wischnewski & Krämer, 2020). Given this limitation, we recommend that future research use more robust techniques to investigate the effects of identity salience, which may lead to a clearer understanding.
In addition, future experiments may first investigate cognitive style and identity salience in greater detail through separate studies, thereby avoiding lengthy experiment durations. Subsequent experiments can then explore specific interactions based on the identified patterns. Another limitation of the present study is that the fake news content was distinguished solely by its alignment with or opposition to political identity groups, neglecting other content-related factors that might impact responses. Across various intergroup themes, fake news content can evoke disparate effects beyond simply supporting or opposing a particular group. For instance, reflection may yield distinct effects when individuals are presented with fake news that either derogates the outgroup or glorifies the ingroup (Çoksan & Yilmaz, 2023). Future studies should examine fake news that both supports and opposes identity groups, considering the theme of the fake news. Another limitation of our study is that the methodology used to assess fake news beliefs does not fully capture truthfulness checking or skepticism based on actual behavior. Although this method includes a well-established and rigorously prepared stimulus pool, it can identify individuals who are more likely to believe misinformation without further investigation, and those who are more skeptical, without assessing real behaviors such as fact-checking or verifying the accuracy of information. Future research should address this limitation by incorporating designs that assess participants' willingness and ability to verify the truthfulness of headlines through additional effort. Such an approach could provide clearer insights into how cognitive and behavioral processes interact in the context of misinformation, offering a better understanding of how people assess information accuracy in real-world settings.

Finally, a methodological limitation of our experiment is the potential for a demand effect arising from conducting outcome measurements immediately after the debiasing training without incorporating any distractors or fillers. As an experimental procedure, distractor or filler items could have been included after the manipulation to mitigate potential demand effects. However, we opted to proceed directly to outcome measures following the debiasing training, considering that effects in social psychology are typically small to moderate (Bardi & Zentner, 2017) and that manipulations often rely on effects that fade quickly over time (Bless & Burger, 2016). In our experiment, participants assessed numerous news items over an extended period, which heightened the risk of manipulation effects fading and increased the likelihood of Type II errors. In this context, the use of distractors or fillers may risk undermining the effects of the manipulation on outcomes. Moreover, using distractor/filler items carries the risk of interaction with manipulation effects, potentially confounding the results (Fayant et al., 2017; Hauser et al., 2018). A potential demand effect would suggest that participants might be inclined to disbelieve news that aligns with their own views. However, the requirement that participants also evaluate true news supporting their own views reduces this possibility. In addition, the fact that the study has a between-subjects design minimizes the risk of demand effects compared to within-subjects experiments (Lonati et al., 2018). Thus, we preferred not to use filler or distractor items in our study. Nevertheless, this remains a limitation of our study as a potential confounding factor that warrants control. We therefore encourage future research to explore possible variations in the news evaluation process when filler or distractor items are introduced after manipulations.

Conclusion

The current study explores the potential role of reflective thinking in combating the spread of politically aligned fake news. Our findings cast doubt on the notion that reflective thinking plays a significant direct role in mitigating belief in fake news. Future studies should offer robust causal evidence and identify the boundary conditions of reflection to illuminate its transformative potential in confronting identity-based biases and fostering a more informed public discourse.