1 Introduction

Because our collective understanding of the world has never been so vast, we individually believe we understand a large part of it. But, most of the time, our justification for these beliefs comes from the expertise of others. Yet those on whom we rely might have epistemic goals different from ours. They might, for instance, have different validation standards for believing a proposition, or simply different practical interests. Consider the example of beliefs regarding potential side effects of COVID-19 vaccines relying on mRNA technology. When these vaccines were first commercialised, tests on them had only been conducted by the pharmaceutical companies that produced and sold them, without time for independent replication or for closer investigation of rare effects (Tanveer et al., 2022). What were we then justified to believe regarding the safety of these vaccines? More generally, what are we justified to believe when we have diverging interests from those on whom this justification relies?

Such settings have been described as cases of epistemic conflict of interest (Henderson, 2020; see Müller, 2022 for a systematic review). They are challenging situations when it comes to belief justification, especially in social contexts and when the justification relies entirely on expert testimony. Indeed, existing theories of belief justification either recommend avoiding epistemic conflicts of interest or ignoring them.Footnote 1 Arguably, this is an important epistemological limitation. A theory of testimony-based justification for belief under epistemic conflict of interest is still lacking. While epistemic conflict of interest should have an impact on justification, that impact should not be all or nothing. Take the example of the COVID-19 vaccines mentioned earlier. Holding that it was fully unjustified to believe these vaccines were safe, without any graded notion of justification, seems unsatisfying. Such radical positions would open the door to a variety of conspiracy theories. What is needed, then, is a theory of justification that comes in degrees and is capable of explaining what beliefs we are justified to hold, and why, despite epistemic conflict of interest.

The reason why providing such a theory is so difficult is that epistemic conflict of interest creates situations which are strategic: agents know that their interests conflict and that trust is not granted a priori. This paper addresses the question of the justification of a hearer’s beliefs based on a speaker’s testimony alone, when there is epistemic conflict of interest between them. It does so through game theory, and specifically by introducing one of its central objects: equilibrium concepts. Equilibrium concepts are assumptions on the beliefs agents hold regarding the behaviour of other agents. For instance, in a Nash equilibrium, all agents believe that a profile of actions is stable if no agent can benefit from a unilateral deviation. It follows that, at equilibrium, the behaviour of players is predictable. Therefore, even under epistemic conflict of interest, information provided at equilibrium cannot be deceiving.

Following Zollman’s (2021) suggested use of game theory for epistemic problems, and by further introducing the behavioural beliefs that lead to the Nash equilibrium, I will argue that it can be justified to ground belief on a speaker’s testimony alone, even when her epistemic goals diverge from those of the hearer. Yet this belief will always be less accurate, in a specific sense, than the speaker’s original belief. For instance, assume the speaker is a climate scientist who has good reasons to believe that a 2\(^{\circ }\)C increase in temperature will lower the current global GDP by 10 percentage points. Under epistemic conflict of interest, a hearer will typically only be justified to hold a belief close to that value, but not equal to it. Importantly, this justification only emerges at equilibrium. Equilibrium behaviours appear in the long run, after a long repetition of trial and error. In practice, these equilibrium mechanisms materialise as social and scientific norms and are embodied in institutions (Bicchieri, 2016; Fehr et al., 2002; Henrich & Muthukrishna, 2021). My results thus contribute to highlighting the importance of these social elements for science to be an influential and reliable basis for belief.

2 Existing views and limitations

Can I rely on the testimony of others to acquire justified belief? This problem has been called the Source Problem for testimonial entitlement. As argued by Hardwig (1985), for a hearer to be justified to ground a belief fully on a speaker’s word, two conditions need to be reached:

(a) The speaker must be an intellectual authority on the subject.

(b) The hearer must be justified in deferring to the speaker regarding her claim.

While most philosophers agree upon condition (a), two sides have been taken regarding (b). In the tradition of Kant’s imperative for epistemic independence, authors such as Chisholm (1989) have defended the idea that beliefs are justified only through the verification abilities of perception or logic; this side has therefore been called reductionist. Yet others have argued that testimony could be a primary source of justification without the need to appeal to other warrants. For proponents of this anti-reductionist side, the central idea is that the default epistemic position regarding a testimonial source should be trust. The latter position is clearly controversial and demands strong arguments in its favour. Two objections have commonly been raised against it:

(i) Agents can have different practical, moral and epistemic motivations.

(ii) Agents can lie.

Several distinct positions have been held on the anti-reductionist side regarding the Source Problem. I will review them and show that they all leave the case of justification under epistemic conflict of interest aside. They are prima facie theories, arguing that one is justified in believing p absent defeaters such as epistemic conflicts of interest. While I agree with their conclusions in non-problematic cases, they fail to address important situations where testimonial justification seems to hold. An ultima facie theory capable of deciding upon belief justification in the presence of an important defeater such as epistemic conflict of interest is, I believe, still lacking.

Ruling out epistemic conflicts of interest Although a variety of anti-reductionist theories have been proposed in recent years, most of them support the idea that testimony can be a justification for belief only if the speaker has no systematic interest in lying; that is, they avoid objection (ii) above.

A first approach to dealing with objection (ii) can be found in Goldberg (2010, 2014). Essentially, Goldberg offers a fallibilist argument: lies can be accounted for as simple mistakes in the justification process. What matters for justification is that speaker and hearer employ the same cognitive process in the formation of their belief. In response to this view, authors like Ross (1986) and Hinchman (2005, 2014) have argued that testimony is not evidence of the same kind as that which our senses can give us. Memorial failure is a random process which can sometimes occur, whereas insincerity is a voluntary one, which can be, at least to some extent, systematically identified. This view has been called the assurance view. It stresses the importance of interpersonal relationships in justification. According to this view, when a hearer acquires testimonial justification for believing a proposition on the basis of a speaker’s claim, his belief is at least in part justified by the speaker’s assurance. This assurance, by nature, is non-evidentialist. Moran (2006, 2018), Faulkner (2007, 2011), Fricker (2012), Zagzebski (2012) and McMyler (2011) further stress the importance of the speaker’s intentions when informing a hearer about a proposition. In the particular case of epistemic conflict of interest, the fallibilist approach is arguably unsatisfactory: lies are not the result of mistakes, but of clear intentions.

Following the assurance view, Graham (2010, 2015) argues that, in practice, what a hearer ought to do is set up a filtering process that narrows the range of entitlement conferred by testimonial exchanges to situations where no epistemic conflict of interest is present. Yet, as argued by Simion (2021), one limit to this argument is that our deception detection abilities are, in most contexts, very limited and mostly unable to provide us with a fine-tuned filter. In other words, we are bad at detecting problematic situations. Consider the vaccine and climate change examples I introduced above. For a non-expert audience, it is almost impossible to detect problematic claims in technical domains with which they are not familiar. This is particularly true of new or unexpected situations such as the global COVID-19 pandemic, during which it was very hard for a non-expert audience to distinguish between diverging expert claims. At least in these situations, a conscious filtering process seems insufficient to grant justification to beliefs acquired through testimony under epistemic conflict of interest.

A more radical line of defence of anti-reductionism has been held, essentially by Burge (1993, 1997, 2013). For Burge, we are a priori prima facie entitled to take intelligible affirmation at face value (Burge, 1993, p. 472). Burge’s argument is mostly about rationality, whose function is to generate true content. So, if agents are rational, testimony about knowledge should by default generate justified beliefs. Recently, Simion (2021) supported this view by arguing that it is because of the presence of a social contract between the speaker and the hearer that the hearer is prima facie entitled to form beliefs based on the speaker’s assertions. Both Burge’s and Simion’s theories of testimony as a source of justification for belief only apply in non-problematic situations, namely when no systematic epistemic conflict of interest exists. In Simion’s words: “If I know [...] that you stand to gain from lying to me, I am not entitled to expect norm conformity anymore.Footnote 2” I agree with Burge and Simion that, without further argumentation, a social contract does not suffice to explain belief justification in situations of epistemic conflict of interest. However, I believe that there are situations where testimony provides such a justification, despite the presence of epistemic conflict of interest.

Two classes of problematic cases Excluding all situations where a speaker could use deception to her advantage from the range of testimony-based justified beliefs seems very restrictive. The literature on epistemic conflicts of interest has brought up numerous cases where testimony still seems a plausible justification for belief.Footnote 3 Consider the two following generic examplesFootnote 4:

1. An obvious case is one where the speaker cares about the belief of the hearer and about its truth, but also has other practical motivations.Footnote 5 Prominent cases include the COVID-19 vaccines relying on mRNA technology. Clearly, the pharmaceutical companies that have produced them are experts in these technologies (condition (a)). When the vaccines were first adopted during the pandemic, the urgency of the situation compelled healthcare institutions to adopt them before normal peer-reviewed testing and certification could be carried out to assess their reliability. So the confidence we had in our belief regarding the safety of these vaccines had to rely fully on their producers’ word. In addition, pharmaceutical companies also had an interest in selling their vaccines and potentially in downplaying any side effect or disadvantage they may have had. So it was unclear whether condition (b) held. For instance, consider the following proposition: \(p_{\omega }\): “The probability that the vaccine has negative side effects is at most \(\omega \)”. The vaccine producer might believe that \(p_{0.1}\) is true but would prefer, in order to remain below the maximum tolerated level of health risk, to convince the hearer that \(p_{0.05}\) is true.

2. A second important case is one where the speaker is a pure epistemic agent, but has different validation standards than the hearer. Consider a climate scientist reporting on the impact of a given global temperature increase on GDP. She has to state a proposition such as: \(q_{\omega }\): “A 2\(^{\circ }\)C increase in temperature will lower the current global GDP by \(\omega \) percentage points”. Such predictions rely on a great number of prospective simulations, all of which differ in their underlying hypotheses or the mechanisms accounted for (see for instance those reported in Meinshausen et al. (2009)). These simulations can be ranked by a continuous confidence level, for instance a measure of their predictive accuracy. How accurate a simulation has to be in order to count as reliable might differ between speaker and hearer. For instance, it might be known that the speaker is more cautious than the hearer and holds that even less accurate simulations should be taken into account. This would cause her to support a more pessimistic proposition. That is, the speaker would report \(q_{\omega }\) while the hearer, in her position, would have reported \(q_{\omega ^{\prime }}\), where \(\omega > \omega ^{\prime }\).

In both these cases, is it always unjustified for the hearer to believe any of the speaker’s statements about mRNA vaccines or climate change? Answering yes implies that a variety of conspiracy theories cannot be avoided by testimony alone. I do not share this intuition. In my view, in the cases described above we are justified to hold at least some belief regarding the COVID-19 vaccine side effects, or the effect of climate change on GDP, based on testimony alone. Yet a theory capable of explaining what belief we are justified to hold, and why testimony can provide this justification despite the epistemic conflict of interest, is necessary, but still lacking.

3 Game theory for social epistemology

3.1 Epistemic agents

Traditionally, game theory has been used to study strategic interactions between agents concerned with pragmatic criteria. The most famous examples, such as the prisoner’s dilemma or coordination games, are illustrated by players concerned with escaping prison (Poundstone, 1993) or choosing a Saturday night leisure activity (Luce & Raiffa, 1989). Yet Joyce (1998) successfully argued that utility functions should also be used to capture purely epistemic motivations of agents concerned partially or fully with learning the truth. This program has been pushed further in many directions (Pettigrew, 2016). While most of this research has been oriented towards decision theory (Bradley, 2017), Zollman (2021) made a particular case in favour of game theory. He argued that agents interested in epistemic values can find themselves involved in game-theoretical situations, even when they have similar interests.

In his paper, Zollman considers agents who are epistemically altruistic, that is, who care only about reaching the truth. From a formal perspective, this is captured by utility functions called scoring rules. My starting point will be similar to Zollman’s. Let me introduce a slightly more general formal frameworkFootnote 6 to illustrate my point. Let \(\Omega = [0,1]\) be the set of possible states. The set of states represents all the possibilities agents agree upon. The agent is unsure about the true state and holds a credence function over the possible states, formally: \(C \in \Delta (\Omega )\). Let c be the probability density of C over \(\Omega \).Footnote 7 Let us call \(b_i \in [0,1]\) agent i’s belief and assume that, for him, this is the state expected to be the true one. This approach to the connection between beliefs and credences is very close to Weatherson (2016). For our agent, his belief \(b_i\) is his action variable. An epistemically altruistic agent’s utility function is maximised when his belief corresponds to the true state, and decreases the further his belief is from that state. A simple way of illustrating this is with the following utility function:

$$\begin{aligned} u_i(b_i,\omega ) = -(b_i - \omega )^2 \end{aligned}$$

where \(\omega \in \Omega \) stands for the (unknown) true state. Assume our agent is Bayesian, that is, that he conforms to expected utility theory (Savage, 1972). When asked what the single most likely state is, our agent answers by maximising the following function over b:

$$\begin{aligned} {\mathbb {E}}(u_i(b,\omega )) = \int _{\omega \in \Omega } c(\omega )u_i(b,\omega )d\omega \end{aligned}$$

In other words, an agent purely concerned with finding the truth will maximise the expected utility of the scoring rule \(u_i\), given his credence, in order to report which single state is, for him, the most likely. \(u_i\) is thus called a proper scoring rule: the agent maximises his expected epistemic utility by honestly announcing his belief b rather than an alternative \(b^{'}\).
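As a minimal numerical check of this property (a sketch only; the Beta-shaped credence density, the discretisation grid and the helper names are illustrative choices of mine, not part of the model), the following snippet verifies that the report maximising the expected quadratic score is the mean of the credence, i.e. the agent’s honest belief:

```python
import numpy as np

# Discretise the state space Omega = [0, 1].
omega = np.linspace(0.0, 1.0, 1001)
d = omega[1] - omega[0]

# A hypothetical credence density c over Omega (a Beta(2, 5) shape),
# normalised so that it integrates to 1 on the grid.
c = omega * (1.0 - omega) ** 4
c = c / (c.sum() * d)

def expected_score(b):
    """Expected quadratic score E[-(b - omega)^2] under the credence c."""
    return float((c * -(b - omega) ** 2).sum() * d)

# The report that maximises the expected score...
candidates = np.linspace(0.0, 1.0, 1001)
best_report = candidates[np.argmax([expected_score(b) for b in candidates])]

# ...coincides (up to grid precision) with the mean of the credence,
# i.e. the agent's honest belief b.
honest_belief = float((c * omega).sum() * d)
print(best_report, honest_belief)   # both close to 2/7 for a Beta(2, 5) credence
```

Any other report lowers the expected score, which is exactly what makes honest announcement optimal for a purely truth-seeking agent.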

Now, let us go back to our speaker/hearer problem. Assume both are epistemic agents. The hearer is a purely altruistic epistemic agent, concerned only about the truth in the manner presented up to now. He holds credence c, which leads him to belief b. The speaker is an expert in the subject and knows the true state. She aims to inform the hearer about it and cares about his belief b. Yet she is not purely altruistic, in the sense that her expected epistemic utility function is maximised when \(b = \omega _0 + m\), where \(m \ne 0\) is called the misalignment factor. For simplicity, let us assume her utility function is the following:

$$\begin{aligned} u_S(b,\omega _0) = -(b - \omega _0 - m)^2 \end{aligned}$$

where \(\omega _0 \in \Omega \) is the true state.
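For concreteness, suppose \(\omega _0 = 0.3\) and \(m = 0.1\): \(u_S\) is then maximised when the hearer ends up believing \(b = 0.4\), that is, when his belief overshoots the true state by exactly \(m\).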

3.2 Nash equilibrium

Consider a case where agents have an epistemic conflict of interest, so that \(m\ne 0\). What will happen when such a speaker and hearer interact? In particular, is the speaker’s claim a strong enough justification for the hearer to believe it?

Of course, it might be that, whatever she reports, our speaker can provide some external proof that she is not lying to the hearer. For instance, researchers can provide statistical evidence or mathematical proofs supporting their claims, and if the hearer is able to understand them, he can check the results by himself. But then we have left the domain of anti-reductionist theories: ultimately the justification relies on logic and not on testimony. The interesting case for us is the one where justification relies only on the speaker’s word.

So let us consider a case where the speaker reports about the state to the hearer, with no other argument in favour of her claim. If both agents are only maximising their utility functions, I claim that the hearer will never be entitled to the same belief as the speaker. For the sake of the example, let us assume this misalignment is due to different validation standards (example 2 above). The hearer is aware of the speaker’s validation standards and thus of the misalignment m. So, if the speaker decides to report sincerely and state \(q_{\omega }\), the hearer will believe that, given that the speaker is maximising her utility and would thus want to induce a belief which is m away from the true one, \(q_{\omega + m}\) is true. But then, the speaker could anticipate the reaction of the hearer and, in order to maximise her epistemic utility function, report \(q_{\omega - m}\). First, note that in doing so, the speaker gives up on sincerity in the hope of inducing the right belief, given her standards. Will it work? Most probably not. Following the same logic, upon hearing \(q_{\omega - m}\) the hearer can anticipate the behaviour of the speaker and consider that \(q_{\omega + m}\) might be true, or even \(q_{\omega + 2m}\). In fact, once speaker and hearer are trapped in this corrective logic, it is impossible to deduce from the speaker’s word what she really knows.
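The back-and-forth can be rendered as a toy best-response dynamic (an illustration only; the translation-and-discount strategies and the value of m are arbitrary choices of mine):

```python
# A toy rendering of the regress: the speaker reports the state shifted by t,
# the hearer interprets a report by subtracting s, and each best-responds in turn.
m = 0.1          # misalignment (an arbitrary illustrative value)

s = 0.0          # hearer's current correction
for step in range(6):
    t = m + s    # speaker's best response: shift reports by m plus the expected correction
    s = t        # hearer's best response: discount reports by the shift he now expects
    print(step, round(t, 2), round(s, 2))

# Both t and s grow by m at every round and never stabilise: with precise,
# point-valued reports and m != 0 there is no pair of mutually best-responding
# strategies, hence no stable way for the hearer to read off the state.
```

No pair of such strategies is ever mutually optimal, which is exactly the dead end the hearer faces.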

Let us call this reasoning a case of infinite correction. By only assuming that agents are maximising their utility, situations of epistemic conflict of interest such as the one I have described lead to dead ends, such as infinite correction. In other words, the speaker has an incentive to lie and the hearer knows it. The latter is thus clearly not justified to hold beliefs based on the former’s claims. Should we then reject justification through testimony under epistemic conflict of interest in general, as argued until now in the literature (Sect. 2)?

I believe we should not. One way of exiting this dead end is to assume more regarding both agents’ behaviour. Namely, I will assume their choice of strategies in the game I consider conforms to the logic of the Nash equilibrium. In the context of our problem, the Nash equilibrium translates as followsFootnote 8:

Nash equilibrium: The hearer is entitled to believe a proposition conveyed by the speaker if, given the resulting belief of the hearer, the speaker has no interest in conveying another proposition.

Let us unpack this proposition carefully. What matters is that an agent will always decide given the reaction he can anticipate from the other. This is exactly why the speaker chooses not to say \(q_{\omega }\) but \(q_{\omega - m}\), and so on. When claims are restricted to precise propositions such as \(q_{\omega }\), anticipating the other’s reaction is what led to the unstable regression presented above. Yet if the speaker had no interest in entering the infinite correction logic in the first place, the Nash equilibrium principle tells us that her claim would sound credible to the hearer. It would naturally follow that the hearer would be justified to believe the speaker’s claim. Is this possible in the examples we are considering? The point made in our example shows that it is not when the speaker restricts herself to propositions of the type \(q_{\omega }\), with \(\omega \in \Omega \). Precise claims are always out-of-equilibrium claims in our game and fail to lead to belief justification. Yet, in the following, I will show that the hearer can possess justification for his belief for a more general class of propositions. But before moving on, it seems important to address a fundamental concern: why do we need to introduce the Nash equilibrium into the picture? Are we justified to do so?

In most cases, one cannot predict the outcome of a game without assuming more about the joint behaviour of players, as the infinite correction case illustrates. Although the Nash equilibrium is the most popular equilibrium concept among game theorists and economists, many others exist. Fourny (2020) offers a comprehensive overview and discusses their respective normative appeal. The Nash equilibrium, in particular, has often been considered as capturing a very selfish conception of social interactions, and its normative attractiveness has been challenged. I would not go as far as saying that our agents ought to interact following its logic. However, in many situations, the Nash equilibrium has a strong positive appeal. In particular, for the class of games we are interested in here (generally called cheap-talk games in the game theory literature), experimental studies such as Battaglini and Makarov (2014) have provided evidence in favour of the Nashian logic. It seems that, in practice, subjects confronted with situations similar to the ones I described above behave according to the predictions of the Nash equilibrium. So, assuming this equilibrium concept in order to predict the outcome of the game we consider has positive grounds for its support. The claim about rationality this paper aims to make is not about how agents should behave when facing strategic situations; I take this aspect as given by empirical evidence. The normative aim of this paper is to establish what agents are justified to believe when their only source of justification is testimony.

Relying on the Nash equilibrium as a warrant for justified belief shifts the weight of justification from potential evidence the hearer could obtain through perception or logic to an equilibrium behaviour. The latter puts some behavioural structure on the strategic setting in which both speaker and hearer find themselves.

3.3 Bayesian updating at equilibrium

Until now we have restricted ourselves to the class of propositions \(p_{\omega }\), with \(\omega \in \Omega \). As we saw with the infinite correction case, as long as there is some misalignment, even if the speaker knows the state, the hearer is never justified in believing \(p_{\omega }\). But what if we considered any proposition of the form \(p_{S}\), where S is a subset of \(\Omega \)?

For the speaker to be able to state some proposition \(p_{S}\) as part of a Nash equilibrium, it must be that, given the resulting belief of the hearer, the speaker has no interest in conveying any proposition other than \(p_{S}\). \(p_{S}\) is then called an equilibrium message. But we still need to specify one element of our definition. Given an equilibrium message \(p_{S}\), what should the resulting credence of the hearer be? Given that our hearer is Bayesian, I will make the natural assumption that he updates c, his credence function over \(\Omega \), following Bayes’ rule. That is: being justified to believe that the state is in S, he simply re-scales his credence from \(\Omega \) to S. Formally, this is:

$$\begin{aligned} c(\omega |p_{S}) = {\left\{ \begin{array}{ll} \frac{ c(\omega )}{\int _{\omega ^{\prime } \in S} c(\omega ^{\prime })d\omega ^{\prime }} \text { if } \omega \in S \\ 0 \text { if not}. \end{array}\right. } \end{aligned}$$

The hearer then maximises his expected utility \(\int _{\omega \in \Omega } c(\omega |p_{S})u_H(b,\omega )d\omega \) with respect to b. On the contrary, if the hearer is told a proposition \(p^{'}\) which is not an equilibrium message (as in the infinite correction example), it seems fair to assume that he will ignore it and stick to his prior credence.Footnote 9
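As a minimal sketch of this updating step (the uniform prior and the interval S = [0, 0.4] are hypothetical choices of mine, and the code is only an illustration of the rule above), one can re-scale a discretised credence on S and recover the hearer’s new belief as the mean of the re-scaled credence, since \(u_H\) is the quadratic scoring rule:

```python
import numpy as np

# Discretised state space and a hypothetical prior credence (uniform here).
omega = np.linspace(0.0, 1.0, 1001)
d = omega[1] - omega[0]
prior = np.ones_like(omega)
prior = prior / (prior.sum() * d)

# Hypothetical equilibrium message p_S with S = [0, 0.4].
lo, hi = 0.0, 0.4
in_S = (omega >= lo) & (omega <= hi)

# Bayes' rule at equilibrium: re-scale the credence on S, zero elsewhere.
posterior = np.where(in_S, prior, 0.0)
posterior = posterior / (posterior.sum() * d)

# The hearer's new belief: the maximiser of his expected quadratic score,
# i.e. the mean of the re-scaled credence (here the midpoint of S, 0.2).
belief = float((posterior * omega).sum() * d)
print(round(belief, 3))
```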

Notice that a consequence of assuming that both parties follow the Nash equilibrium logic is that, once they reach equilibrium, strategies are known to all. This means in particular that an equilibrium claim of the speaker cannot be deceiving, in the sense that \(\omega _0\) will always be in the support of \(c(\omega |p_{S})\). However, hearer and speaker can end up with different beliefs, given that the former’s is only \(argmax_{b \in \Omega } \int _{\omega \in \Omega } c(\omega |p_S)u_H(b,\omega )d\omega \).

Now that all elements necessary to express my claim have been defined, let me explicitly formulate the version of anti-reductionism I support in this paper.

Equilibrium anti-reductionism

“A hearer is entitled to the belief \(b \in \Omega \) induced by a speaker’s testimony alone if and only if it follows from an equilibrium communication process.”

It is important to note that Equilibrium Anti-Reductionism does not posit that the hearer is entitled by default to believe the propositional content of the speaker’s claim, but only the induced belief acquired at equilibrium. Thus, Equilibrium Anti-Reductionism relies heavily on the three premises I defended in this section:

1. The definition of belief and its connection with credence I assumed in Sect. 3.1.

2. The Nash equilibrium logic and the underlying behavioural beliefs I described in Sect. 3.2.

3. The Bayesian updating rule I described in the current sub-section.

In the following I will explain why these assumptions lead to Equilibrium Anti-Reductionism.

4 Justification at equilibrium

A game with the same mathematical structure, but with a different interpretation from the one I described up to now, has been studied in the game theory literature by Crawford and Sobel (1982). Their first result, Lemma 1, is of great interest for our discussion. I rephrase it here:

Lemma 1

(Crawford & Sobel, 1982) The game has a finite number of equilibria, all of which are partitional.Footnote 10

What is a partitional equilibrium? Figure 1 illustrates the structure of such an equilibrium. In this example, if the true state is in \([0,\omega _1]\), the speaker sends message \(p_{[0,\omega _1]}\), while if the state is in \([\omega _1,1]\), she sends message \(p_{[\omega _1,1]}\).

Fig. 1 Example of a 2 cut-off equilibrium

Notice that what Lemma 1 implies is that there is a partition of \(\Omega \) into a finite number of intervals and that, at equilibrium, all states in a given interval lead to the same message. It follows that, at equilibrium, the hearer knows exactly in which interval of the partition the true state \(\omega _0\) lies. However, nothing imposes that the propositional content of an equilibrium claim has to be truthful. In the two-interval partition we are considering here, separated at \(\omega _1\), the equilibrium claims could perfectly well be “all rationals in [0, 1]” for all states below \(\omega _1\) and “all irrationals in [0, 1]” for states above. At equilibrium, the hearer will still be able to identify the interval of \(\Omega \) from which a given claim originates and rescale his credence on that interval.

Yet, in practice, it is without loss of generality to consider only claims whose propositional content is the equilibrium interval—here “\([0, \omega _1]\)” and “\([ \omega _1,1]\)”—as they will induce exactly the same beliefs as less natural claims such as those I presented above.Footnote 11 Thus, if one assumes the speaker will use claims whose propositional content is the equilibrium interval, the hearer is justified in conditionalising his credence over the propositional content of the speaker’s claim.

In principle, multiple equilibria can exist, each of which is characterised by a finite number of states called cut-off states—\(\omega _1\), in Fig. 1. For instance, there can also be a 4 cut-off equilibrium, where more propositions can be justified (see an illustration in Fig. 2).

Fig. 2 Example of a 4 cut-off equilibrium

In the words of game theorists, the main consequence of Lemma 1 is that information transmission is always imprecise. In the words of epistemologists, this means that the hearer will never be justified through testimony alone to hold credence 1 that the state is \(\omega _0\). For instance, in the examples above, if the speaker believes in state \(\frac{\omega _1}{3}\), the hearer will only have the true,Footnote 12 Nash-justified credence that the state is in \([0,\omega _1]\) and, if his prior credence is uniformly distributed, will believe that the state is \(\frac{\omega _1}{2}\).

There is always at least a 1 cut-off equilibrium where, whatever the true state, the speaker sends the same message (the cut-off then being \(\omega _1=1\)): \(p_{\Omega }\). We mentioned this proposition earlier, and noticed it was fully uninformative. Why is it an equilibrium message? Because the speaker clearly cannot be lying when saying something the hearer already knew. So she could always say \(p_{\Omega }\) and be believed by the hearer. The more cut-offs an equilibrium has, the more precise the propositions the hearer is justified to believe.

Why are these communication strategies part of an equilibrium? Let us take the example of the 2 cut-off equilibrium of Fig. 1, where \(\omega _1\) is the only cut-off in (0, 1). Here, the speaker has the choice between stating two propositions which will be believed by the hearer: \(p_{[0,\omega _1]}\) and \(p_{[\omega _1,1]}\). Any other proposition will be ignored. For this to be an equilibrium, it must be that, for all states below \(\omega _1\), the speaker prefers to separate them from the states above it. That is: if the state the speaker believes in is below \(\omega _1\), she prefers to claim \(p_{[0,\omega _1]}\), while if that state is above \(\omega _1\), \(p_{[\omega _1,1]}\) will be her preferred claim.

If the speaker claims \(p_{[0,\omega _1]}\), Bayes’ rule will cause the credence of the hearer to re-scale over \([0,\omega _1]\), leading to a belief \(b_{[0,\omega _1]}\)Footnote 13 which is necessarily in this interval. Similarly, if \(p_{[\omega _1,1]}\) is claimed, the belief of the hearer, \(b_{[\omega _1,1]}\),Footnote 14 will be in \([\omega _1,1]\). Yet, because we assumed that epistemic utility functions are scoring rules, they possess a unique maximum, strictly increase for beliefs before that maximum and strictly decrease for beliefs after it. Therefore, it can be that, if the true state is \(\omega _1\), the speaker is exactly indifferent between the hearer believing \(b_{[0,\omega _1]}\) and \(b_{[\omega _1,1]}\). Figure 3 illustrates how this can be possible. A direct consequence is that if the true state is below \(\omega _1\), the speaker strictly prefers claiming \(p_{[0,\omega _1]}\) to \(p_{[\omega _1,1]}\), and vice versa if the true state is above \(\omega _1\). Thus, both messages conform to the Nash principle: the speaker can do no better than using them as prescribed.

Fig. 3 Identifying cut-off \(\omega _1\)
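To see how such a cut-off can be pinned down explicitly, consider the uniform prior credence and quadratic utilities used in my examples, and assume \(m > 0\) (the computation below is only an illustration under these assumptions, not part of Crawford and Sobel’s general proof). The induced beliefs are then the interval midpoints, \(b_{[0,\omega _1]} = \omega _1/2\) and \(b_{[\omega _1,1]} = (\omega _1+1)/2\), and the speaker whose state is \(\omega _1\), preferring the belief \(\omega _1 + m\), is indifferent between them exactly when

$$\begin{aligned} (\omega _1 + m) - \frac{\omega _1}{2} = \frac{\omega _1 + 1}{2} - (\omega _1 + m), \quad \text {that is, } \omega _1 = \frac{1}{2} - 2m. \end{aligned}$$

Such a cut-off exists in (0, 1) only when \(m < 1/4\): too large a misalignment already rules out the 2 cut-off equilibrium.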

One important thing to notice is that the existence of cut-off states depends on the misalignment. Assume that, in our climate scientist example, the misalignment is very large, for instance \(50\%\). This means that whatever the effect of a 2\(^{\circ }\)C increase in temperature on current GDP, the climate scientist wants the hearer to believe it is 50 percentage points below what she really believes it to be. Recall it is assumed that this number is between 0 and 100. For there to be a cut-off state, there must be a percentage such that the speaker is indifferent between the belief induced by saying the effect on GDP is below that level and the belief induced by saying it is above it. Clearly, given the misalignment and as the minimal level is \(0\%\), that level has to be above \(50\%\). In addition, since the misalignment is so strong, for any state \(\omega \) between 50 and 99, the speaker will always strictly prefer to induce a belief as low as possible. So \(p_{[\omega ,1]}\) is not an equilibrium message and there is no 2 cut-off equilibrium. The smaller the misalignment, the more cut-offs can exist and, in the limit, when m is close to 0, the speaker can be almost perfectly precise.
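How the number of cut-offs shrinks with the misalignment can also be sketched numerically. The snippet below is only an illustration for the uniform-prior, quadratic-utility case of my examples, with \(m > 0\) (the mirror-image case works symmetrically); the helper functions are hypothetical names of mine. It builds the cut-off states from the indifference condition at each cut-off, which in this case takes the form \(\omega _{i+1} - \omega _i = (\omega _i - \omega _{i-1}) + 4m\):

```python
import math

def max_intervals(m):
    """Largest number of intervals a partitional equilibrium can have
    (uniform prior on [0, 1], quadratic utilities, misalignment m > 0)."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1.0 + 2.0 / m))

def cutoffs(m, n):
    """Cut-off states of the n-interval equilibrium, if it exists.
    They solve omega_{i+1} - omega_i = (omega_i - omega_{i-1}) + 4m
    with omega_0 = 0 and omega_n = 1."""
    first = (1.0 - 2.0 * n * (n - 1) * m) / n    # length of the first interval
    if first <= 0:
        return None                               # no equilibrium with n intervals
    return [i * first + 2.0 * i * (i - 1) * m for i in range(n + 1)]

for m in (0.2, 0.05, 0.01):
    n = max_intervals(m)
    print(m, n, [round(w, 3) for w in cutoffs(m, n)])
# Larger misalignment -> fewer, coarser intervals -> less precise justified
# credences; as m shrinks, the partition becomes arbitrarily fine.
```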

Because the hearer is only interested in the truth, he would always like to know more about the state. Interestingly, despite misalignment, the speaker would also like to be perfectly precise about her belief.Footnote 15 Imagine, as a thought experiment, that before becoming an expert, the speaker could commit herself to always speak the truth about what she will learn. Then, averaging her epistemic utility over all the possible states she could learn,Footnote 16 she would be better off than in any partitional equilibrium agents would end up in without commitment. In other words, the weakening of justification due to the misalignment is a strategic effect which happens despite both agents’ epistemic interests. It is entirely a consequence of the environment they are in. This result goes further, even without commitment: if the speaker could choose a communication strategy before learning the state, she would always choose the one corresponding to the equilibrium with the most cut-offs. That is, she would want to be as precise as she could be, in a certain sense.
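The commitment claim can be checked directly in the uniform-prior, quadratic-utility case (a back-of-the-envelope computation under these illustrative assumptions, writing \(\Delta _i\) for the lengths of the equilibrium intervals). With commitment to sincerity the hearer always believes the true state, which lies at distance m from the speaker’s preferred belief, whereas at a partitional equilibrium the hearer believes the midpoint of the announced interval, which adds a variance term:

$$\begin{aligned} EU_S^{\text {commit}} = -m^2 > -\sum _i \frac{\Delta _i^3}{12} - m^2 = EU_S^{\text {equilibrium}}. \end{aligned}$$

Since the first term of the right-hand side is strictly negative whenever communication is imprecise, the speaker would indeed, on average, have preferred to commit to the truth.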

5 Credence imprecision and belief accuracy

The theory I proposed offers a game-theoretical foundation for testimony-based credence justification. Given the equilibrium claim \(p_S\) made by the speaker, the hearer is justified in adopting the credence \(c(\cdot |p_S)\). In addition, given what I assumed regarding the connection between credences and beliefs, it also follows that the hearer is justified in believing \(b_S\), where

$$\begin{aligned} b_{S} = argmax_{b \in \Omega } \int _{\omega \in \Omega } c(\omega |p_S)u_H(b,\omega )d\omega \end{aligned}$$

Note that this specific connection between credences and beliefs is not necessary to support my claim. As my theory provides a justification for testimony-based credences, it is sufficient to assume that beliefs can be reduced to credences, as done for instance in Sturgeon (2008), Foley (2009) or Leitgeb (2013), to further obtain a justification for testimony-based beliefs.

However, the precision of the credence obtained through equilibrium communication plays an important part in the accuracy of the resulting belief of the hearer. Beliefs in my setting are elements of \(\Omega \subset {\mathbb {R}}\). Therefore, I will measure the accuracy of belief \(b \in \Omega \) through the natural Euclidean distance to the true state: \(|b-\omega _0|\).Footnote 17 By credence precision I mean the size of the interval of states making the claim \(p_S\) at equilibrium. Formally, call \(S= [l,u]\) an interval of \(\Omega \) such that there is an equilibrium strategy \(\sigma \) where \(\sigma (\omega ) = p_S\) for any \(\omega \in [l,u]\). The size of S is then \(|u-l|\).

First, note that the hearer’s posterior credence can never be perfectly precise. This is because claims of the type \(q_{\omega }\), where \(\omega \in \Omega \), are always out-of-equilibrium claims in our game. In other words, the hearer is not justified in believing them, even if the speaker is telling the truth.Footnote 18

Equilibrium claims are always of the form \(q_{I}\), where I is an interval of \(\Omega \) containing the true state \(\omega \). The hearer is thus always justified to hold a less precise posterior credence than the speaker’s. But this has consequences for the accuracy of the hearer’s resulting belief.

Lemma 2

On average, the smaller the epistemic conflict of interest, the more accurate the belief of the hearer.

Proof of Lemma 2:

Assume the equilibrium with the most cut-offs in the game has n cut-offs. I will show that, in that equilibrium, on average, the smaller the epistemic conflict of interest, the more accurate the belief of the hearer. The same logic applies to any other equilibrium.

Notice that, whatever \(\omega _0 \in \Omega \), \(p_S(\omega _0)\), the corresponding equilibrium claim of the speaker, which will induce a belief \(b_S\), is a function of m. A formal rephrasing of Lemma 2 is to say that the expected distance between the true state \(\omega _0\), which is the hearer’s optimal belief, and the hearer’s equilibrium belief \(b_S\) is increasing in the magnitude of the misalignment \(|m|\). Yet, given that the expected utility of the hearer is a proper scoring rule, this expected distance is increasing in \(|m|\) if and only if the expected utility of the hearer is decreasing in \(|m|\). It also follows from Crawford and Sobel (1982, Eq. (25)) that the expected utility of the hearer is:

$$\begin{aligned} -\frac{1}{12n^2} -\frac{m^2(n^2-1)}{3} \end{aligned}$$

which is decreasing in \(|m|\). \(\square \)
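For a quick numerical illustration of Lemma 2 (a sketch only: uniform prior, quadratic utilities, arbitrary values of the misalignment, and picking for each value the equilibrium with the most cut-offs; the helper names are mine), one can evaluate Crawford and Sobel’s closed form for the hearer’s ex ante expected utility:

```python
import math

def max_intervals(m):
    # Largest number of intervals of a partitional equilibrium
    # (uniform prior, quadratic utilities, misalignment magnitude m > 0).
    return math.ceil(-0.5 + 0.5 * math.sqrt(1.0 + 2.0 / m))

def hearer_expected_utility(m):
    # Closed form for the hearer's ex ante expected utility in the
    # equilibrium with the most cut-offs (n intervals), uniform-quadratic case.
    n = max_intervals(m)
    return -1.0 / (12.0 * n ** 2) - (m ** 2) * (n ** 2 - 1) / 3.0

for m in (0.2, 0.1, 0.05, 0.01):
    print(m, max_intervals(m), round(hearer_expected_utility(m), 4))
# As the misalignment shrinks, the hearer's expected utility rises; since
# u_H is a proper scoring rule, his belief is then, on average, more accurate.
```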

What Lemma 2 shows is that the smaller the epistemic conflict of interest, the more accurate, on average, the belief of the hearer. This is because the smaller the epistemic conflict of interest, the more precise, on average, the credence of the hearer. Credence imprecision thus gives weaker evidential support to the hearer’s belief, leading, on average, to a less accurate belief than under reduced epistemic conflict of interest.

Notice that this result is only valid on average. It can be that, for a given state, a less biased speaker induces a less accurate belief than a more biased speaker. It can even be that, for some state, a very biased speaker leads the hearer to the true belief. But given the continuous nature of our setting, this happens with probability 0. On average, epistemic conflict of interest leads to more imprecise (but credible) claims, which themselves lead to less accurate beliefs.

Dang and Bright (2021) show that, in practice, 20th century scientific research has often sacrificed the norm of accuracy for the sake of credibility—as I argue here. This result also has a similar flavour to those obtained by O’Connor (2014) regarding vagueness in predicates, and in particular in natural languages. While O’Connor shows that vague predicates arise as a result of quick and successful signalling conventions, I show that they are the result of a quest for strategic compatibility for the speaker and justification for the hearer. A similar point has been made by Mayo-Wilson (2014), who builds on a formal approach to show that the degree of insularity of scientific communities has a major impact on their credibility with non-expert communities under epistemic conflict of interest.Footnote 19 Although formally different, as Mayo-Wilson builds on networks and not on communication models, his result conceptually parallels the one I derived regarding the impact of misalignment on the accuracy of equilibrium beliefs.

6 Discussion

Even under epistemic conflict of interest, I offered reasons to think that the hearer can still be justified to trust the speaker and to build belief upon her words. But this can happen only in very specific situations and only for a certain class of beliefs.

Equilibrium in practice Justification is only granted at equilibrium. Two things are to be noted regarding equilibrium behaviours.

First, acting according to the Nash equilibrium logic is the result of beliefs about other players’ behaviour: all players must believe that other players will act as such.Footnote 20 Then, these behavioural beliefs act as a self-fulfilling prophecy, leading the speaker’s claim to conform to the Nashian logic, and thus justifying the hearer in holding propositional beliefs such as \(q_{[\omega _1,\omega _2]}\). As argued by Voss (2001) or Bicchieri and Sontuoso (2020), the adoption of a social norm can be seen as resulting from that kind of behavioural belief. The idea has notably been applied by Lewis (2008) in a sender-receiver game similar to the one studied here, to suggest that the emergence of languages results from an equilibrium behaviour.Footnote 21 The same idea has been widely defended by proponents of evolutionary game theory (Smith & Price, 1973), for whom collective norms appear in the long run as the result of an equilibrium process.

Second, it is important to note that social norms, just like equilibrium behaviours, take time to establish themselves. In a social context, agents try different speaking and hearing strategies until their joint behaviour stabilises, because no epistemic benefit can be gained from a further change. As a result, in situations which have not reached their equilibrium state, belief justification fails. This would be the case for new or unexpected events, such as the early stages of the COVID-19 pandemic, where the settings of the game are still not well understood by all players. Claims made by experts at that time had not yet reached equilibrium. This observation shows how important scientific norms and practices are for securing the credibility of expert claims. Because scientific norms are the result of equilibrium behaviour, they provide justification to an audience of laypeople to rely on expert advice. In the same vein as Zollman (2019), this result gives another reason to understand why deviation from good scientific practices, such as scientific fraud, is extremely detrimental to Science’s credibility. By drawing a link between equilibrium behaviours and collective norms, this paper also aligns with Gerken (2015, 2022) regarding the importance of scientific norms in the context of intra-scientific testimony and in expert/laypeople communication.

Similar approaches To my knowledge, two similar formal approaches to the Source Problem have been proposed in the epistemological literature: Faulkner (2011) and Duijf (2021).

While, similarly to the analysis defended in the present paper, Faulkner agrees that the Source Problem is essentially a problem of cooperation (or a game-theoretical problem), he concludes against anti-reductionism:

[S]peakers and audiences have different interests in communication[...] Our interest, qua audience, is learning the truth. Engaging in conversations as to the facts is to our advantage as speakers because it is a means of influencing others: through an audience’s acceptance of what we say, we can get an audience to think, feel, and act in specific ways. So our interest, qua speaker, is being believed...because we have a more basic interest in influencing others...[T]he commitment to telling the truth would not be best for the speaker. The best outcome for a speaker would be to receive an audience’s trust and yet have the liberty to tell the truth or not. (2011, pp. 5–6)

The main difference between Faulkner’s approach and mine is that Faulkner does not assume any equilibrium concept. Therefore, his reasoning essentially takes place out of equilibrium. It is thus natural that we agree on the fact that the hearer is not justified in believing \(q_{\omega }\) type claims. But assuming behavioural beliefs and thus looking at the Nash equilibrium of the game enables me to grant justification for certain classes of beliefs.

A second reason for these differences is that Faulkner’s premises are slightly different from mine. Although we agree that the default position for the speaker is to promote her own interest, Faulkner also considers that her interest is to be believed. This latter assumption is at odds with the equilibrium logic: in the long run, if being believed were the speaker’s interest regardless of anything else, she could do no better than simply telling the truth. In addition, it seems to me that, in situations such as those I have listed above, the speaker’s interest goes beyond the mere aim of being believed.

Another paper close to mine is Duijf (2021). Duijf considers an expert/layperson communication model where the former informs the latter regarding the consequences of a given decision. The expert has varying degrees of expertise (captured by the probability that he knows the state) and there are varying degrees of misalignment between parties (captured by a commonly known probability). Duijf examines the delegation question: when is it rational for the layperson to delegate an action to the expert and when is it not? Although an important question, it is distinct from the one I am interested in here: when is it rational for the hearer to hold a proposition conveyed by a speaker for knowledge? Interestingly though, our results have a similar flavour: it must be that the misalignment between speaker and hearer is not too wide.Footnote 22

An alternative approach to the problem of expert reliability builds upon the concept of reputation. The hearer is justified to believe the speaker in virtue of the latter’s reputation or, in other words, in virtue of the repeated observation that the speaker says the truth. In the words of game theorists, one should consider these situations as repeated games rather than one-shot games (the latter being what I considered in the present paper). While I agree that, in many situations, this argument is compelling to ensure the justification for knowledge, it deals with situations which are beyond the anti-reductionist realm. Repeated observations of the conjunction between the speaker’s claims and reality cannot be made without perception or logic. I believe that anti-reductionism is precisely relevant for cases where reputation cannot play a part and where the decision to trust has to be made in one shot. These cases can be of prime importance: how can one assess the reliability of climate scientists regarding (b) when it comes to describing a +5\(^{\circ }\)C planet? Cases of misalignment can exist, such as differences in validation standards (Lloyd et al., 2021), moral disagreements (Gundersen, 2020) or publication incentives. How can one assess the reliability of epidemiologists regarding (b) when assessing the long-term effects of mRNA-based vaccines? Similar misalignment incentives can easily be imagined.

7 Conclusion

Can a hearer be rationally justified to have beliefs based on testimony alone when there is a known epistemic conflict of interest with the speaker? While this question has been largely set aside by the literature, I have argued that it is essential for many practical situations. In particular, being able to provide a theory that supports a positive answer to it is essential to support the modern production of scientific knowledge. On the basis of a game-theoretical approach, I have shown that epistemic dependence under epistemic conflict of interest is possible, but only at equilibrium. Equilibrium behaviours appear in the long run and are embodied in scientific norms. This result contributes to highlighting the—already stressed—importance of scientific norms for Science’s credibility. Moreover, even at equilibrium, the belief the hearer will be justified to hold will be, on average, less accurate than the speaker’s original belief.