Introduction

Fake news has recently attracted growing attention across disciplines (Allcott & Gentzkow, 2017; Arechar et al., 2023; Batailler et al., 2022; Chiluwa & Samoilenko, 2019; Lazer et al., 2018; Lewandowsky & van der Linden, 2021; Maertens et al., 2023; Pennycook et al., 2020; Pennycook & Rand, 2021; Shu et al., 2017; X. Zhang & Ghorbani, 2020; Zhou & Zafarani, 2020). Due to various cognitive, affective, and motivational factors, as well as to the professional appearance of much false information on the internet, many people struggle to detect fake news (e.g., Bago et al., 2020; Ecker et al., 2022). If fake news is ideologically loaded, detection may be even more difficult. The increased difficulty in detecting fake news that aligns with one’s political identity may be explained in one of two ways. First, self-serving judgmental biases like the tendency to preferentially accept information that confirms one’s own views (Jones & Sugden, 2001; Klayman, 1995; McKenzie, 2004; Mercier, 2017) can render it even more difficult to discern false from true information. Such biases may be viewed as motivational attempts to preserve prior views. Alternatively, a preference for confirmatory evidence may be viewed as rational, on the assumption that one’s prior beliefs are true (Batailler et al., 2022; Klayman & Ha, 1987; see also Musolino et al., 2022, e.g., p. 17). That is, if a belief is true, new evidence should be expected to be confirmatory and may not warrant any suspicion. Regardless of the underlying cause, this ideological belief bias – or the tendency to accept information that agrees with one’s partisan identities and to reject information that disagrees with them – induces people to be less likely to detect fake news when it conforms with their political views (e.g., Bago et al., 2020; Greene et al., 2021; Pennycook & Rand, 2019).

We report an experiment investigating the cognitive and motivational processes implicated in judging the veracity of (true and) fake news. Respondents from the United States judged a series of true and fake headlines taken from recent online news (Pennycook et al., 2021). All headlines were political in nature and were perceived as favorable toward one of the two major political parties (see the Materials and procedure section below). Based on participants’ self-reported political leaning, we categorized headlines as congruent (aligned) or incongruent (conflicting) with their political views. To examine the cognitive and motivational underpinnings of the ideological belief bias, we built on theories in motivation psychology (Gollwitzer, 1990, 2012) and political reasoning research (Kahan et al., 2012; Pennycook & Rand, 2021). In particular, we examined how being in a deliberative vs. implemental state of mind would affect the ideological belief bias in fake news detection. In addition, we assessed participants’ tendency to engage in cognitive reflection (Frederick, 2005) and their susceptibility to “pseudo-profound bullshit” (Pennycook et al., 2015).

The study’s design allowed us to study the ideological belief bias in a naturalistic setting that closely resembled a typical social media news feed. Our study had several aims. First, the experiment was designed to test two competing accounts of the effects of deliberation in fake news detection. The next section explains how we used the mindset theory of action phases (Gollwitzer, 1990, 2012; Keller et al., 2019) to derive predictions about how a deliberative state of mind relates to fake news detection from the perspectives of motivated reasoning accounts (Kahan, 2013; Kunda, 1990) and dual-process theory (Evans & Stanovich, 2013; Pennycook & Rand, 2021). Second, we aimed to replicate the ideological belief bias effects (e.g., Aspernäs et al., 2022; Calvillo et al., 2020) in the context of fake online news, contributing to the current debate about partisan bias (Gawronski, 2021; Pennycook & Rand, 2021). Finally, our rich dataset allowed further exploration of correct responses and errors from a signal detection perspective (Batailler et al., 2022) and of the relations among mindsets, cognitive reflection, and fake news detectability. These analyses suggest promising avenues for future research on motivational states and reasoning about political news content.

Background

Recent discussion in research on political reasoning concerns the role of deliberation (Bago et al., 2020; Calvillo et al., 2020; Pennycook et al., 2020; Pennycook & Rand, 2019). Proponents of motivated reasoning accounts (Charness & Dave, 2017; Kahan, 2013; Kahan et al., 2017) posit that deliberation may reinforce the ideological belief bias, because reasoning serves to defend and rationalize one’s position. On this view, people who are more inclined to reason about potential fake news should show a greater ideological belief bias. An opposing view, based on dual-process theory (Evans, 2008; Evans & Stanovich, 2013), assumes that deliberation should reduce the ideological belief bias because reasoning facilitates an unbiased assessment of new information (Bago et al., 2020; Pennycook & Rand, 2019). Recent research seems to agree that deliberation does not trigger politically motivated reasoning (Calvillo et al., 2020) and improves fake news detection (Bago et al., 2020; Pennycook & Rand, 2019).

However, such findings may be a result of participants in fake news research having the explicit goal of correctly identifying true and false headlines. Research on motivated reasoning observes that while motivation may bias reasoning toward people’s directional goals, people may also have accuracy goals (Kunda, 1990). When someone is motivated to get the correct answer, reasoning is expected to serve them well. Alternatively, given ideological motives, reasoning may exacerbate an ideological belief bias. To account for motivational influences on reasoning, we applied research on motivational processes to investigate the impact of motivational states of mind on the ability to detect fake news.

The mindset theory of action phases (Gollwitzer, 1990, 2012; Keller et al., 2019) emphasizes the distinction between goal-setting and goal-striving to understand how cognitive processes are attuned to different stages of goal pursuit. According to this theory, distinct sets of cognitive procedures (mindsets) are activated during goal-setting and goal-striving to facilitate performance of these fundamentally different tasks.

A deliberative mindset supports goal-setting by activating procedures that allow for a balanced consideration of the pros and cons of multiple goals (Achtziger & Gollwitzer, 2018; Gollwitzer & Bayer, 1999). The implemental mindset is characterized by a distinct set of cognitive procedures that strengthen persistence in goal-striving and shield the goal from competing temptations (Gollwitzer & Bayer, 1999; Keller et al., 2019). Building on mindset theory, we (a) investigate whether deliberative and implemental mindsets moderate information processing and political judgments in relation to true and fake news headlines on the internet, and (b) leverage the unique cognitive characteristics of the deliberative and implemental mindsets to test two competing accounts of deliberation in fake news detection against each other. Table 1 summarizes our predictions.

Table 1 Overview of predictions for deliberative and implemental mindsets (relative to a control condition) based on the mindset theory of action phases

Given their distinct characteristics, deliberative and implemental mindsets may differentially affect the perception and interpretation of political information. The deliberative mindset supports more thorough information search, favors a balanced consideration of the available evidence, and induces a sense of skepticism (Bayer & Gollwitzer, 2005; Büttner et al., 2014; Ludwig, Jaudas et al., 2020; Rasso, 2015). Being in a deliberative mindset is related to spending more time and effort on searching for evidence before making choices (Büttner et al., 2014; Ludwig, Jaudas et al., 2020), which can improve decisions. For example, a deliberative mindset enabled auditors to identify and incorporate (contradictory) information from various parts of complex audits, improving their ability to identify unreasonable accounting estimates (Griffith et al., 2015). More generally, the deliberative mindset is linked to open-mindedness and a broad attentional focus, including increased sensitivity to peripheral and incidental information (Fujita et al., 2007; Gollwitzer, 1990). In sum, a deliberative state of mind activates cognitive procedures that support searching, weighing, and evaluating evidence, thereby facilitating analytical thinking and reasoning.

Importantly, the cognitive procedures that characterize the deliberative mindset carry over to subsequent tasks. That is, the mindsets stay in place for some time even if the task that initiated the mindset is completed or on hold (Gollwitzer, 1990; Gollwitzer & Kinney, 1989). We leverage this quality of mindsets to study how their distinct properties affect judgments of true and fake news headlines.

Our main question is how being in a deliberative mindset relates to the ideological belief bias (relative to a control condition). If reasoning processes instigated by a deliberative mindset are primarily used for rationalizing one’s political views (e.g., Kahan, 2013), we expect the deliberative mindset to amplify the ideological belief bias, thereby reducing fake news detection for headlines aligned with one’s political preferences. If, on the other hand, deliberation facilitates unbiased information processing (e.g., Pennycook & Rand, 2019), we expect the deliberative mindset to reduce ideological belief bias and support fake news detection.

In contrast, the implemental mindset relies on self-serving biases to facilitate goal-striving (Armor & Taylor, 2003; Bayer & Gollwitzer, 2005; Brandstätter & Frank, 2002). Ideological belief bias may shield political attitudes from competing views that could require adjustments of one’s goals. We therefore expected that participants in an implemental state of mind (compared to a control condition) would be less likely to detect fake news if headlines aligned with their political preferences.

In addition to these nuanced predictions drawn from mindset theory, we sought to replicate patterns of fake news detection established by previous research. First, research on the ideological belief bias predicts that headlines that align with participants’ political views are more likely to be accepted as true (Aspernäs et al., 2022; Bago et al., 2020; Calvillo et al., 2020; Gampa et al., 2019). Accordingly, Democrat-leaning participants should be less likely to detect fake news favoring Democrat over Republican views and Republican-leaning participants should be less likely to detect fake news that favors Republicans.

Prior research has also found that scores on the cognitive reflection test (CRT; Frederick, 2005; Toplak et al., 2014) and the pseudo-profound bullshit receptivity scale (BSR; Pennycook et al., 2015) are associated with people’s detection of fake news (Pennycook & Rand, 2020, 2021; Sindermann et al., 2020). We expected to replicate these relationships. Higher CRT performance should predict more news correctly identified as true/fake, while higher BSR scores should predict fewer correct classifications.

Additionally, there is evidence to suggest that mindset effects on cognition, judgment, and decision making may, to some extent, depend on gender (Hügelschäfer & Achtziger, 2014). Further analyses add gender as a predictor and explore its main effects and interactions with the experimental factors. Moreover, the nature of the ideological belief bias may vary across news type (true/fake). For instance, the effect could be more pronounced for fake than for true news. Finally, we performed exploratory analyses based on a recent proposal to view fake news from a signal detection theory perspective (Batailler et al., 2022), facilitating a more nuanced view of correct responses (hits, correct rejections) and errors (misses, false alarms) in judging politically loaded news. Signal detection theory also permits a decomposition of responses into measures of discrimination sensitivity and response bias (see Results section), yielding deeper insights into the cognitive and behavioral mechanics of fake news judgments.

Method

The hypotheses, design, materials and procedures, and analysis plan were peer-reviewed and then preregistered at PsychArchives (see https://doi.org/10.23668/psycharchives.5390). Ethical approval was given by the local ethics committee. We first report how we determined our sample size, all data exclusions, manipulations, and measures in the study.

Participants and design

Six hundred and one participants took part in a 15-minute online experiment hosted at www.soscisurvey.de. Participants were recruited via www.bilendi.us and were randomly assigned to one of the three between-subjects mindset conditions (control, deliberative mindset, implemental mindset). Compensation in the form of coupon-redeemable points was set by the provider based on the duration of participation in the study.

The target sample size of 600 was based on an a priori sensitivity analysis, which indicated that this sample would be appropriate to detect small effects (d = 0.121, η2 = 0.004) in a 3 (mindset) × 2 (news type: true vs. fake) × 2 (congruence: yes vs. no) mixed ANOVA (given α = 0.05, 1 – β = 0.90, and a moderate correlation of r = .50 between the repeated measures), the linear model most closely corresponding to the planned mixed-effect model analysis (see below).
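The two effect-size metrics reported above are linked by the standard conversion between Cohen’s d and η²; a minimal sketch of this arithmetic in R (the language used for our main analyses) is given below. It illustrates only the conversion, not the full sensitivity computation, which also depends on α, power, and the assumed correlation between repeated measures.

# Standard two-group conversion between Cohen's d and eta-squared (sketch only).
d <- 0.121
eta_sq <- d^2 / (d^2 + 4)
round(eta_sq, 3)   # ~0.004, matching the value reported above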

Several exclusion criteria were put in place to ensure data quality. As preregistered, a total of one hundred and four participants were excluded for the following reasons: participants were younger than 18 (n = 5), had their current residence outside the US (1), rated all 18 headlines as true (7) or fake (32), or failed the basic plausibility check (59) of their responses in the mindset task (see below). After exclusion, the final sample size was N = 497. Given the above parameters for a sensitivity analysis, this sample is sufficient to detect effects of small size (d = 0.133, η2 = 0.004) in the simple linear model. To facilitate the calculation of signal detection theory parameters, we removed six more participants for the exploratory analyses.
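For illustration, the preregistered exclusions can be expressed as a simple filtering step; the sketch below uses hypothetical object and variable names (raw_data, age, us_resident, n_rated_true, passed_plausibility) and is not our actual analysis script.

# Sketch of the preregistered exclusions; object and variable names are hypothetical.
library(dplyr)
analysis_sample <- raw_data %>%
  filter(age >= 18,              # exclude participants younger than 18
         us_resident,            # exclude residences outside the US
         n_rated_true > 0,       # exclude participants rating all 18 headlines as fake
         n_rated_true < 18,      # exclude participants rating all 18 headlines as true
         passed_plausibility)    # exclude failed plausibility checks in the mindset task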

Materials and procedure

Participants first provided informed consent, then proceeded to the mindset task, and rated the veracity of 18 news headlines. Lastly, they completed a small battery of further measures before debriefing, which contained feedback on whether each headline was true or fake.

Mindset induction

We induced deliberative and implemental mindsets following the procedures introduced by Gollwitzer and Kinney (1989; see also Ludwig, Jaudas et al., 2020; Rahn et al., 2016). To induce a deliberative mindset, participants considered the pros and cons of taking action on a current personal concern. We asked participants to select a concern that they had not yet decided how to resolve. They listed relevant pro/con arguments and rated them for valence and likelihood of occurrence. Participants in the implemental mindset condition selected a personal goal that they already had decided to pursue, but which they had not yet begun to achieve. They generated information about how, when, and where to act on several steps required to bring them closer to their goal. These procedures have been reliably found to activate deliberative and implemental mindsets (Achtziger & Gollwitzer, 2018; Gollwitzer & Bayer, 1999; Keller et al., 2019). Participants in the control group proceeded directly to the measurement of the dependent variables.

To check whether the mindset induction was successful, we included three measures at the end of the study: A two-item measure of determination to act according to their self-reported personal concern (rated on a 9-point scale from 0 to 8), a five-item goal commitment scale (Klein et al., 2001), and a single item of decidedness (“Where do you stand on the timeline regarding your personal concern or project?”) on a timeline from pre-decisional (coded 0) to post-decisional (100). The midpoint of the scale (50) was labeled as the moment of deciding. Implemental mindset participants should score higher on all three measures than deliberative participants (note that these items were omitted in the control condition). Mean comparisons (see the online supplement) suggested that the procedures successfully induced the intended mindsets.

News headlines

We selected headlines from a larger set of recent political news (Pennycook et al., 2021). Following these authors’ recommendation, we ran a pilot (N = 81) on a pre-selection of 26 headlines to assess familiarity with the news, accuracy ratings, and (assuming the headline was accurate) how favorable it would be to Democrats vs. Republicans. We then eliminated items with outdated political content and high familiarity ratings (if more than 10% of participants had seen the headline or heard about it). The final set of 18 headlines contained nine headlines favoring Democrats (5 true, 4 fake) and nine headlines favoring Republicans (5 true, 4 fake). In the pretest, participants rated 56% (Democratic) and 50% (Republican) of the true headlines as true, whereas fake news headlines were rated as true at a rate of 27% (Democratic) and 31% (Republican). The average distance to the midpoint (3.5) of the political preference scale (1 = Strongly in favor of Democrats, 6 = Strongly in favor of Republicans) was approximately symmetric for Democrat- and Republican-leaning headlines (-0.66 for the set of nine headlines favoring Democrats and 0.71 for the headlines in favor of Republicans; see Figure A1 in the online supplement for more details).

In the main study, we presented 18 headlines in randomized order (the set of headlines is available at https://osf.io/r43fm). For each headline, participants responded yes or no to the question: “To the best of your knowledge, is the claim in the above headline accurate?” We categorized each headline as aligned (congruent) with or opposed (incongruent) to participants’ political views based on the above pretest ratings of the headlines and participants’ self-reported political preference (1 = Strongly Democratic, 6 = Strongly Republican).
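To illustrate the congruence coding, the sketch below shows one plausible implementation; the variable names and the midpoint splits are our own illustrative assumptions, as the exact coding rule is described only verbally above.

# Hypothetical sketch of the congruence coding (variable names and splits assumed).
leans_democrat   <- political_preference <= 3     # 1 = Strongly Democratic, 6 = Strongly Republican
favors_democrats <- pretest_favorability < 3.5    # pretest rating below the scale midpoint
congruent <- (leans_democrat & favors_democrats) |
             (!leans_democrat & !favors_democrats)   # TRUE = headline aligned with own political views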

Further measures

We assessed receptivity to pseudo-profound bullshit (BSR) and cognitive reflection (CRT) because these measures have been associated with fake news detection in prior research (e.g., Pennycook & Rand, 2020). Receptivity to pseudo-profound bullshit was assessed with the BSR scale (Pennycook et al., 2015); the internal consistency of the measure was good in this study, Cronbach’s α = 0.92.

Cognitive reflection was assessed with a 4-item version of the classic CRT (Frederick, 2005; Toplak et al., 2014). This is a small deviation from the pre-registration. We had planned to use a modified version of the CRT (Ludwig & Achtziger, 2021), which would have had the advantage that correct answers could not be found online. Due to a programming error, the original CRT (without the bat-and-ball item) was displayed in the study instead of the modified variant. During the CRT (Cronbach’s α = 0.70), we recorded how many times participants left the browser tab running the study, which may represent attempts to find the correct response through a quick web search. Because participants were instructed to work alone and not use any auxiliary means of answering questions, the number of tab changes served as a behavioral indicator of dishonesty (Ludwig & Achtziger, 2021; see also Ludwig et al., 2023). We investigated how this variable related to mindsets and political preference (see exploratory analyses).

Analytical approach

The independent variables in our design are mindset (between-subjects), news type (within-subjects), and congruence with political preference (within-subjects). The main dependent variable is the veracity rating for each headline, which we converted into a variable reflecting the correct identification of news as true/fake (coded 1 for correctly identified as true/fake). For hypothesis testing, we implemented mixed-effect models using the lme4 package for R (Bates et al., 2015; R Core Team, 2022). Mixed-effect models have some advantages over traditional ANOVA (see e.g., Brown, 2021; Jaeger, 2008; Judd et al., 2012); for instance, they avoid aggregating data across trials. We used a logistic mixed-effect regression to analyze headline choices and a linear mixed-effect model to analyze response times. We entered random intercepts for participants and headlines, and by-subject random slopes for news type (true/fake). This resulted in the following base model:

DV ~ mindset + news type + congruence + (1 + news type | participant) + (1 | headline).

We evaluated the statistical significance of the predictors in this model by likelihood ratio tests (LRT; comparison against reduced model without the fixed effect in question). Moreover, we added interactions between the factors and further variables (CRT, BSR, gender) stepwise. On each step, a likelihood ratio test compared the extended model to the base model, and we retained predictors for the next step if they improved model fit, as indicated by ∆AIC ≥ 2 (see e.g., Burnham & Anderson, 2002). Regression result tables were created with sjPlot (Lüdecke, 2020).
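A minimal lme4 sketch of the base model and of one step of this procedure is shown below; variable names are illustrative, and the preregistered script may differ in detail.

# Sketch of the base model and one step-up comparison (variable names illustrative).
library(lme4)
m_base <- glmer(correct ~ mindset + news_type + congruence +
                  (1 + news_type | participant) + (1 | headline),
                data = headline_data, family = binomial)

# Likelihood ratio test for a fixed effect: compare against the reduced model.
m_reduced <- update(m_base, . ~ . - congruence)
anova(m_reduced, m_base)

# Step-up: add an interaction and retain it if fit improves by delta AIC >= 2.
m_step <- update(m_base, . ~ . + news_type:congruence)
AIC(m_base) - AIC(m_step)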

Results

Table 2 (see also Fig. 1) summarizes veracity ratings of true/fake news headlines across mindset conditions. Veracity ratings in all three mindset conditions were consistent with an ideological belief bias for both true and fake news. That is, participants more accurately classified true headlines as such when the news was congruent with their political views. Participants were less able to detect fake headlines when the news was congruent.

Table 2 Share of incongruent and congruent true and fake news headlines correctly identified
Fig. 1

Share of correctly identified fake and true news across congruent (aligned) and incongruent trials (opposing political views) in the control, deliberative, and implemental mindset conditions. *** p < .001

Hypothesis tests: mindsets and the ideological belief bias

The pre-registered stepwise mixed-effect regression is summarized in Table 3. For enhanced readability, the table only presents steps that improved the model fit. As described above, we first estimated a baseline Model 1, entering mindsets, news type, and congruence as fixed effects, random intercepts for participants and headlines, and by-subject random slopes for news type. Regression estimates indicated that true headlines were generally less likely to be correctly identified as true (relative to fake news correctly identified as fake), OR = 0.24 with 95% confidence interval [0.14, 0.44], z = -4.65, p < .001. Across news type, participants were more likely to correctly classify congruent than incongruent news, OR = 1.51 [1.37, 1.67], z = 8.34, p < .001.

To test the hypotheses that an ideological belief bias attenuated the detection of partisan fake news, we evaluated the news type × congruence interaction in Model 2. This interaction was statistically significant, χ2(1) = 157.72, p < .001. Accordingly, participants were less likely to correctly identify fake news as fake if it conformed with their political views, OR = 0.67 [0.57, 0.78], z = -5.01, p < .001. Additionally, they were substantially more likely to accept true news as true when it aligned with their views, OR = 3.72 [3.03, 4.55], z = 12.66, p < .001. This result was in line with the prediction and earlier research on political reasoning (Aspernäs et al., 2022; Calvillo et al., 2020; Gampa et al., 2019), suggesting an ideological belief bias in judging fake news (see also Bago et al., 2020).

Next, we turn to our hypothesized motivational and cognitive moderators of ideological belief bias. With regard to mindsets, our manipulation check suggested that the induction of deliberative and implemental mindsets was successful (see Table A2 in the online supplement). We assessed the three-way interaction of mindsets, news type, and congruence to evaluate the hypothesis that the deliberative and implemental mindsets moderate the ideological belief bias. As indicated by the LRT, adding this interaction to the model did not improve the fit, χ2(6) = 5.92, p = .432 (see online supplement). Hence, the hypothesis of a moderating role of mindsets in fake news detection was not supported in the present analysis (however, see the signal-detection analysis below). Finally, Model 3 suggested that higher CRT performance was associated with more headlines correctly identified as true or fake, OR = 1.12 [1.07, 1.17], z = 4.76, p < .001 (LRT χ2(1) = 22.01, p < .001). Accordingly, one more CRT item solved correctly increased the odds of correctly identifying news headlines by 12%. Neither gender nor bullshit receptivity was related to fake news detection.

Finally, we examined response times across correct responses and errors in the headline task. On average, participants took around ten seconds to rate each headline’s veracity. For fake news, errors (falsely rating a fake headline as true) were slower than correct rejections, which might suggest that participants were unsure of the right answer and required additional time to deliberate before responding. For true headlines, we observed that errors (falsely rating a true headline as fake) were faster than correct responses, but only when the news was incongruent with participants’ political views. This might be suggestive of an ideological belief bias leading to a swift rejection of incongruent headlines, though such conclusions should be tempered in light of the different pattern of response times for fake headlines. Tables A5 and A6 in the online supplement report descriptive statistics and additional analyses (see also Alós-Ferrer, 2018; Ludwig, Ahrens et al., 2020).

Table 3 Results of the logistic mixed-effect regression for correct classification of headlines

A signal-detection analysis of fake news headlines

Recently, Batailler et al. (2022) demonstrated a fruitful application of signal-detection theory (SDT) to fake news detection. Under SDT, rating a headline as true is categorized as a Hit when the headline is accurate, or as a False Alarm (FA) when the headline is fake. Responding false to a true headline represents a Miss, while responding false to a fake headline is a Correct Rejection (CR).
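In our data, each headline rating thus falls into one of the four SDT outcome categories; a brief coding sketch (variable names illustrative):

# Sketch of the per-trial SDT outcome coding (variable names illustrative).
sdt_outcome <- ifelse(headline_true  & rated_true,  "Hit",
                ifelse(headline_true  & !rated_true, "Miss",
                 ifelse(!headline_true & rated_true,  "False Alarm", "Correct Rejection")))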

Applying the SDT perspective to our hypotheses, we expected the following patterns of Hits/FAs and Misses/CRs. Given that a deliberative mindset should increase reasoning, this should either cause additional open-minded evaluation of the headline (under the dual-process account) or additional partisan justification of the headline (motivated reasoning account). As such, under a deliberative mindset, dual-process theory predicts fewer FAs, while motivated reasoning predicts more FAs. Likewise, for Hits and Misses, a motivated reasoning account predicts more Missed true headlines, while dual-process theory predicts a greater tendency to Hit when headlines are true. We tested these hypotheses but note that these analyses were not preregistered.

We analyzed fake headlines (coded 1 for FA) and true headlines (coded 1 for Miss) separately. In so doing, we used logistic mixed-effect regressions, following a step-up procedure as described above. Table 4 summarizes the results. For fake headlines, there was a slight decrease in FAs in the deliberative mindset condition, OR = 0.75 [0.57, 0.98], z = -2.11, p = .034. Political preference and receptivity to pseudo-profound bullshit also predicted False Alarms. Both Republican political leaning, OR = 1.88 [1.51, 2.35], z = 5.61, p < .001, and BSR, OR = 1.39 [1.24, 1.55], z = 5.60, p < .001, were associated with more FAs (see Table A7 in the online supplement). In addition, we observed an interaction between political preference and BSR (see also Table 4), indicating that the positive relation between BSR and FAs was significantly weaker for Republicans than for Democrats.

Table 4 Logistic mixed-effect regression results for errors in classifying fake headlines (FAs; left) and true headlines (Misses, right)

In response to comments from anonymous reviewers, we checked the robustness of these findings in four additional analyses. The first two analyses address concerns about the classification of Democrats/Republicans on a bipolar scale. First, to ensure that our findings describe partisans, we reran our analysis excluding participants who self-categorized as “Independents” (n = 123). Second, to check if deliberative and implemental mindsets might only affect participants with weak political attitudes, we created an indicator of strength of political preference and examined its interaction with mindsets. Strength of political preference was indicated by the distance of participants’ political orientation rating to the midpoint of the scale (3.5). Next, we conducted two further robustness checks to account for CRT scores which may have been inflated by cheating. In the third analysis, we added the cheating indicator (coded 1 for cheaters) to the regression model. Finally, we repeated the analysis including only the subsample of honest participants. Results did not change substantially across three out of the four robustness checks. The exception was the first robustness check, which reduced the effect of being in a deliberative mindset below statistical significance, although the effect size remained similar. This is likely due to the much smaller sample size in this analysis.

Parallel analyses for true headlines indicated no effects of mindsets. However, we found that Republicans were more likely to falsely classify true news as fake, OR = 1.29 [1.08, 1.54], z = 2.79, p = .005, while both cognitive reflection and BSR were negatively related to Misses (see Table A8 in the supplement). As shown in Table 4, political preference interacted with both CRT and BSR, suggesting that the overall negative relation between BSR and Misses was driven by Republican-leaning participants, while the negative relation between CRT and Misses was significantly stronger for Democrats. Results remained largely robust across the four control analyses, with the exception that the interaction between political preference and CRT no longer reached statistical significance.

Beyond analyzing Hits/FAs and Misses/CRs, SDT decomposes responses into two key indices describing headline veracity ratings (Batailler et al., 2022). First, discrimination sensitivity (d’) captures how well individuals can discriminate between true and fake news. Perfect sensitivity would correspond to a probability of 1 for both labeling true headlines as true and for classifying fake headlines as false. Second, response bias (c) describes the threshold of perceived veracity beyond which a headline will be rated as true. A smaller (more liberal) response bias will lead to a greater tendency to respond that headlines are true, increasing both Hits (when headlines are true) and FAs (for false headlines). On the other hand, a greater (more conservative) response bias yields more headlines rated as false and will thus lead to more CRs (in the case of fake headlines), but also more misses (when headlines are true; see Batailler et al., 2022, for more details on these indices).
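For reference, under the standard equal-variance Gaussian model the two indices are computed from the hit and false-alarm rates as shown below; this is a generic sketch, and we assume the conventional formulas also used by Batailler et al. (2022).

# Standard equal-variance Gaussian SDT indices (generic sketch).
# hit_rate = P(rated true | headline is true); fa_rate = P(rated true | headline is fake).
d_prime <- qnorm(hit_rate) - qnorm(fa_rate)            # discrimination sensitivity
c_bias  <- -0.5 * (qnorm(hit_rate) + qnorm(fa_rate))   # response bias (criterion)
# Higher c = more conservative (more headlines rated as fake); rates of exactly 0 or 1
# must be adjusted (e.g., with a log-linear correction) before applying qnorm.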

Batailler et al. (2022) identified three predictions for d’ and c in the context of fake news detection, which can be tested in our dataset. First, people’s tendency for cognitive reflection, as indexed by their CRT score, should promote greater discrimination sensitivity (d’). Second, ideological belief bias should be evident in response bias (c). The threshold for judging news as true should be higher when headlines are incongruent as opposed to congruent (i.e., it should take more evidence to overcome the tendency to reject incongruent news). Third, a motivated reasoning account would predict an interaction between cognitive reflection and headline congruence on c. If partisans engage in motivated reasoning, increased reflection should lead to a lower threshold for judging ideology-congruent news as true, relative to incongruent news. This effect should be more pronounced in participants who are more reflective.

Batailler et al.’s re-analysis of data from Pennycook and Rand (2019) found support for the first two predictions, but not for the third. Our dataset allows us to re-examine these predictions and to add new insights. Additionally, novel predictions may be derived from the mindset intervention. If a deliberative mindset increases reasoning, we expect results consistent with Batailler et al.’s first prediction, with participants in the deliberative condition showing greater d’. At the same time, an implemental mindset should increase response bias for incongruent headlines.

We calculated d’ and c separately for congruent and incongruent headlines and analyzed the indices with linear mixed-effects regression, including random intercepts for participants. We added mindset condition and congruence as fixed effects. In a step-up procedure like the one described above, we also evaluated CRT, BSR, and interactions between the fixed effects. Results are summarized in Table 5.
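A sketch of this computation step, using the formulas above and illustrative variable names, might look as follows.

# Sketch: d' and c per participant and congruence cell, then mixed-effect models.
library(dplyr); library(lme4)
sdt_cells <- headline_data %>%
  group_by(participant, mindset, congruence) %>%
  summarise(hit_rate = mean(rated_true[headline_true]),
            fa_rate  = mean(rated_true[!headline_true]),
            .groups  = "drop") %>%
  mutate(d_prime = qnorm(hit_rate) - qnorm(fa_rate),
         c_bias  = -0.5 * (qnorm(hit_rate) + qnorm(fa_rate)))

m_dprime <- lmer(d_prime ~ mindset + congruence + (1 | participant), data = sdt_cells)
m_cbias  <- lmer(c_bias  ~ mindset + congruence + (1 | participant), data = sdt_cells)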

Table 5 Linear mixed-effect regression results for the analysis of signal-detection parameters for discrimination sensitivity (d’) and response bias (c)

First, we replicate Batailler et al.’s findings that cognitive reflection (CRT score) predicts higher sensitivity (d’; see Table A9 in the online supplement). That is, more reflective participants showed better discrimination between true and fake headlines. However, our analyses indicated that political preference moderated the relation between CRT and d’. Accordingly, the association between CRT and d’ was weaker for Republicans than for Democrats (see Table 5). We also observed a significant interaction between congruence and political preference on d’. While both Democrats and Republicans had higher sensitivity for congruent than for incongruent news, this relationship was weaker for Republicans. We checked the robustness of these findings, again by running supplementary analyses as described above. The results remained robust across all additional analyses.

We also replicate Batailler et al.’s finding that response bias (c) is lower for congruent, relative to incongruent headlines (see Table A10 in the supplement). However, we observe that political preference also moderated this relation (see Table 5). Accordingly, while both Democrats and Republicans had a lower (more liberal) response bias for congruent headlines, this relationship was much stronger for Republicans. This observation suggests the ideological belief bias was stronger for Republican-leaning participants in our sample. Moreover, we find that BSR negatively predicts response bias. Hence, participants who were more susceptible to pseudo-profound bullshit had a more liberal veracity threshold, resulting in more (fake and true) headlines classified as true. These results did not change substantially across the four robustness checks.

Finally, with regard to mindsets, there was little evidence of any effects on either d’ or c. However, we point out the descriptive finding that participants in a deliberative mindset, relative to controls, had slightly higher (more conservative) response bias (but note that this relation did not reach conventional levels of statistical significance). This observation may suggest that the deliberative mindset increased skepticism across the board, raising people’s veracity threshold.

CRT performance and cheating

Consistent with previous research (Pennycook & Rand, 2020; Sindermann et al., 2020), cognitive reflection emerged as an important predictor of fake news detection. For further exploration, we examined the association of mindsets, gender, BSR, and cheating (as indicated by the number of times participants changed browser tabs) to CRT performance. Given that classic CRT assessments can be distorted by participants looking up answers online (Ludwig & Achtziger, 2021; Ludwig et al., 2023), it is theoretically and practically important to understand exactly how this measure of cognitive reflection relates to, and potentially predicts, improved fake news detection.

A proportional odds logistic regression performed with the MASS package for R (Venables & Ripley, 2002) revealed an interesting pattern of CRT performance depending on gender and mindsets. Table 6 (see also Figure A2 in the online supplement) shows that males generally outperformed females on the CRT, a common result that indicates a bias in the measure rather than genuine gender differences in cognitive reflection (Alós-Ferrer et al., 2016; Ludwig & Achtziger, 2021; Ring et al., 2016; Zhang et al., 2016). Interestingly, females’ performance on the CRT benefitted from being in an implemental mindset, OR = 2.01 [0.90, 4.47], t = 2.37, p = .018. Females’ odds of scoring higher on the CRT were increased by 101% in the implemental mindset relative to the control group. This increase was not present for males, OR = 0.44 [0.23, 0.87], t = -1.99, p = .047.
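A minimal sketch of this model specification with the MASS package follows; variable names are illustrative, and the model reported in Table 6 also included further predictors examined in the text above (e.g., BSR and the cheating indicator).

# Sketch of the proportional odds model for CRT scores (0-4); variable names illustrative.
library(MASS)
crt_model <- polr(factor(crt_score, ordered = TRUE) ~ mindset * gender,
                  data = participant_data, Hess = TRUE)
summary(crt_model)   # coefficients are on the log-odds scale; exponentiate for ORs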

Table 6 Proportional odds logistic regression for CRT performance (score range 0–4)

Discussion

In this study, we investigated relations among deliberative and implemental mindsets, ideologically biased judgment, and (motivated) reasoning about fake news. Our primary goals were to examine the cognitive and motivational processes underlying fake news detection and to use the resultant fine-grained understanding of these processes to shed light on ongoing debates about the function of reasoning.

Results from our online experiment suggest that ideological belief bias influences judgments of headline veracity, consistent with earlier findings in research on political reasoning (Aspernäs et al., 2022; Calvillo et al., 2020; Gampa et al., 2019) and fake news detection (Bago et al., 2020; Gawronski, 2021). Across mindset conditions, participants were more likely to correctly identify fake news as false information when it was incongruent with their political views (see also Fig. 1). For true headlines, we observed the reverse. Headlines that opposed participants’ political orientation were less likely to be correctly identified as true.

These congruence results suggest that people are inclined to believe news reports that support their political positions. As we noted in the introduction, this tendency may be viewed as rational or irrational. If people are motivated to believe information that confirms their biases and form beliefs based solely on this motivation, this would be irrational. Alternatively, people may simply be more inclined to trust information sources that agree with them. For example, given a strong trust in science, the report of a scientific paper may be deemed trustworthy without any motivational needs entering the picture. The present study did not attempt to disentangle these sources of ideological belief bias. However, once reasoning is engaged (cf. Sommer et al., 2023), a similar question arises regarding the functional role of reasoning processes. Motivated reasoning accounts (e.g., Kahan, 2013) assume that reasoning functions to bolster the ideological belief bias by seeking confirmatory evidence and rationalizing counterevidence. In contrast, dual-process theories (Evans & Stanovich, 2013; Pennycook & Rand, 2021) propose that reasoning is balanced and serves to counteract the ideological belief bias. By manipulating motivational states and measuring cognitive reflection and bullshit receptivity, we aimed to clarify the role of reasoning in fake news detection.

Our experiment failed to produce clear evidence regarding the competing predictions derived from motivated reasoning accounts and dual-process theory. Mindsets did not moderate the ideological belief bias. There are several potential reasons why our mindset manipulation did not show the expected effects, despite the manipulation check indicating the mindset induction was successful. First, mindset effects on judgment and decision making are generally small in terms of effect size. Although our experiment was well-powered, even larger group sizes might be necessary to demonstrate rather small mindset effects on fake news detection. Second, we induced mindsets in an unrelated task prior to the headline ratings. It is possible that inducing deliberative and implemental mindsets that are directly linked to the headline task, i.e., while participants read and judge the news, could result in stronger mindset effects. Third, participants might have had goals other than accuracy in the rating task (e.g., being a good party member), which could have reduced the strength of mindset influences. Measuring participants’ goals, using a mindset induction procedure more closely tied to the headline task, or incentivizing accuracy could address these concerns. Fourth, it could be argued that the deliberative mindset intensifies reasoning of both kinds: it may strengthen tendencies toward rationalization as well as open-minded, unbiased evaluation of evidence. While our above hypotheses rested on the implicit assumption that one of these characteristics of reasoning might dominate under a deliberative mindset, it is also possible that they canceled each other out on average, effectively producing a null result. Finally, as we discuss in more detail below, female participants induced into implemental mindsets showed a marked improvement on the CRT. As better CRT scores are associated with improved fake news detection, higher cognitive reflection in the implemental condition may explain our failure to find support for the hypothesis of reduced fake news detection under an implemental mindset.

While we fail to find support for the predicted mindset effects on fake news detection, we replicate and extend findings on cognitive processes that inform debates on the role of reasoning. First, we replicate findings that cognitive reflection (CRT score) predicts fake news detection (e.g., Pennycook & Rand, 2020, 2021; Sindermann et al., 2020), supporting the dual-process view that reasoning supports a balanced consideration of evidence. We further replicate Batailler et al.’s (2022) results indicating that CRT score is associated with higher discrimination sensitivity (d’). Moreover, our results extend this finding, indicating that political preference moderated the relationship between CRT and d’: Democrats, but not Republicans, showed an increased d’ with higher CRT scores. We also find a significant interaction between congruence and political preference on d’. That is, while both Democrats and Republicans had higher sensitivity for congruent than for incongruent news, this relationship was significantly weaker for Republicans. We also replicate results showing a lower response bias (c) for congruent, relative to incongruent headlines (Batailler et al., 2022) and identify a novel moderator in the form of political preference. Though both Democrats and Republicans displayed a more liberal response bias for congruent news, Republicans showed a greater tendency in this direction. These findings suggest that political preference may be an important moderator of the complex relationship between reasoning and fake news detection. Additionally, we replicate the absence of an interaction between cognitive reflection and c (Batailler et al., 2022), suggesting that motivated reasoning did not have a large effect in our study.

In addition, applying signal detection theory to fake news (Batailler et al., 2022) suggested a potential nuanced role of mindset dynamics in fake news detection. For instance, if the deliberative mindset increases skepticism, this could reduce the presumed truth of headlines, leading to fewer Hits but also fewer False Alarms (and thereby more Misses and Correct Rejections). In the above main analysis, this would not be picked up because overall accuracy might remain constant. As a concrete example, suppose a control participant had 10 correct ratings, comprising 5 Hits and 5 CRs, and 8 incorrect ratings, comprising 4 Misses and 4 FAs. The same person, if induced to be skeptical, might reduce their tendency to say “true,” thereby reducing their Hits by 2 (= 2 additional Misses) but also increasing their CRs by 2 (= 2 fewer FAs). This would give them 3 Hits, 7 CRs, 6 Misses, and 2 FAs, or 10 correct and 8 incorrect – the same pattern as before the mindset induction, despite the changes in their responses.
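A quick arithmetic check of this hypothetical example (using the standard SDT formulas sketched earlier) confirms that overall accuracy is unchanged while the response criterion shifts:

# Hypothetical participant from the example above (9 true and 9 fake headlines).
before <- c(hits = 5, misses = 4, crs = 5, fas = 4)
after  <- c(hits = 3, misses = 6, crs = 7, fas = 2)
accuracy <- function(x) unname((x["hits"] + x["crs"]) / sum(x))
c_bias   <- function(x) unname(-0.5 * (qnorm(x["hits"] / (x["hits"] + x["misses"])) +
                                        qnorm(x["fas"]  / (x["fas"]  + x["crs"]))))
accuracy(before); accuracy(after)   # both ~0.56: accuracy unchanged
c_bias(before);   c_bias(after)     # ~0.00 vs. ~0.60: criterion becomes more conservative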

Modeling false alarms, we observed that a deliberative state of mind was associated with producing fewer FAs (or, equivalently, more correct rejections). This observation suggests that the deliberative mindset helped to prevent erroneously rating fake headlines as true, pointing to the mindset’s potential to improve fake news detection. It is also generally consistent with a dual-process view that predicts fewer FAs in the deliberative mindset. Additionally, this deliberative effect may reflect a difference between two types of motivated reasoning: accuracy-motivated reasoning and directionally-motivated reasoning (Kunda, 1990). While motivated reasoning is stereotypically associated with the latter, more self-serving, style of reasoning, people may also reason with the goal of attaining the correct answer. The deliberative mindset may make people more likely to reason with open-minded accuracy goals (Hügelschäfer & Achtziger, 2014; Keller & Gollwitzer, 2017), leading to their improved discrimination of fake news headlines. We emphasize that this finding is exploratory, that the effect is relatively small, and that it was maintained in only three out of four robustness checks. Further research should ascertain the robustness of the beneficial impact of being in a deliberative state of mind. If this result can be replicated, it may open new avenues for future research on motivational states and fake news detection, which may produce novel methods of boosting people’s resilience to mis- and disinformation (Ecker et al., 2022; Lazer et al., 2018; Lewandowsky & van der Linden, 2021).

Exploring the relations among mindsets, cognitive reflection, and cheating, we observed a notable improvement on the cognitive reflection test for females in an implemental state of mind (see also Figure A2 in the online supplement). This observation is consistent with the idea that mindsets produce distinct beneficial effects for females and males in cognitive performance tasks (Hügelschäfer & Achtziger, 2014). Prior research suggested that males tend to be overconfident in their cognitive task performance, while females underestimate their abilities (e.g., Barber & Odean, 2001). Mindsets are also related to individuals’ confidence in cognitive tasks (Hügelschäfer & Achtziger, 2014) and may therefore have shaped participants’ confidence on the CRT. By enabling a more balanced view of oneself, the deliberative mindset reduces confidence in one’s own skills. The implemental mindset, on the other hand, boosts self-serving cognitions and thereby augments self-evaluation and confidence assessments (Bayer & Gollwitzer, 2005; Hügelschäfer & Achtziger, 2014). For females, this boost in confidence may counteract underconfident self-assessment and its detrimental behavioral consequences for performance (see e.g., Dahlbom et al., 2011; Jouini et al., 2018).

It is noteworthy that females performed so much better in an implemental mindset because this could relate to our failure to detect the predicted response patterns for implemental participants. Participants in an implemental mindset were expected to show a stronger manifestation of the ideological belief bias, which was predicted to reduce fake news detection. However, being in an implemental state of mind improved (females’) performance on the cognitive reflection test, which is associated with better fake news detection. This might suggest that the true effect of mindsets on fake news detection is complex and was obscured in the present dataset. Further research is required to disentangle gender-specific mindset effects on cognitive reflection and fake news detectability. Future studies in this area would benefit from exploring how and why self-confidence and cognitive reflection predict fake news detectability.

We observed that the cheating indicator (number of browser tab changes; see Ludwig & Achtziger, 2021; Ludwig et al., 2023) predicted performance on the cognitive reflection test, despite the fact that performance was not incentivized. Participants could not earn additional money for performing well on the task, but still seemed to be motivated to search for the correct solutions on the web. Being categorized as a cheater (if at least one browser tab change was recorded) increased the odds of scoring higher on the CRT by 208% (cf. Table 6). We investigated a possible relationship between cheating and mindsets but found no statistically significant group differences for the full sample (see Table A3 in the online supplement for further detail and additional analysis). When extreme outliers (with more than eight tab changes during the four-page CRT section of the questionnaire) were excluded, there was a tendency for more cheating in the implemental mindset. However, given that performance was not incentivized, future research should seek to further explore the impact of mindsets on cheating behavior in circumstances where dishonesty pays.

One related limitation of this study is the lack of explicit incentives in the headline task. In future studies, it would be useful to incentivize veracity ratings because incentives can improve measurements by reducing the performance variability of judgment tasks (Camerer & Hogarth, 1999; Hogarth et al., 1991; Smith & Walker, 1993). The headline task may have other limitations, namely outdated content and participant familiarity with particular news stories included in the task. Our pretest sought to mitigate these concerns, but we cannot rule out these effects entirely. Finally, we acknowledge that our classification of participants as leaning Democrat or Republican has limitations. On an alternative question on party affiliation, a sizeable share of participants self-identified as Independents, rather than leaning toward one of the parties. There are different ways to address this issue. Some might suggest dropping the data of Independents, because they can obscure the examination of partisan bias. We decided to retain their data and relied on the bipolar measure to capture political leaning. In addition, we ran supplemental analyses, excluding Independents and adding an index of strength of political preference to our regressions, to check the robustness of our findings with the full sample. We acknowledge that this approach can be improved to more accurately reflect the political landscape in the United States.

Finally, we note that our publicly available dataset may be a rich source of hypothesis discovery for researchers interested in fake news detection. The dataset includes numerous variables potentially relevant to fake news (e.g., political affiliation, mindsets) and several measurements related to cognitive reflection and ideological bias (CRT, BSR, headline congruence). This data may permit exploratory analyses that may contribute to new experiments in this growing field.

In conclusion, our study holds important new insights for the discussion of fake news detectability in psychological research and beyond. We present initial evidence to suggest further consideration of the deliberative mindset as a tool to reduce individuals’ susceptibility to false information in the online media. Future research on the topic should focus on the signal detection framework to further examine how motivational states relate to judgmental errors in the evaluation of fake news content, and political reasoning more generally.