
Why so negative? Evidence aggregation and armchair philosophy

Published in Synthese.

Abstract

This paper aims to clarify a debate on philosophical method, and to give a probabilistic argument vindicating armchair philosophy under a wide range of plausible assumptions. The use of intuitions by so-called armchair philosophers has been criticized on empirical grounds. The debate between armchair philosophers and their empirical critics would benefit from greater clarity and precision in our understanding of what it takes for intuition-based approaches to philosophy to make sense. This paper discusses a set of rigorous, probability-based tools for determining what we can and cannot learn from intuitions in various conditions. These tools can tell us whether beliefs can be justified by armchair practices, and what empirical findings would have to show to undermine the use of intuitions in philosophy. Using these tools, the paper shows that armchair philosophy makes sense in a broad range of situations, and that it is quite plausible that we are in those situations at the moment.

[Figures 1–13 appear in the full article.]


Notes

  1. This introduction owes a lot to an anonymous referee who articulated what this paper intended to do much better than I could on my own.

  2. Thank you to Tyler Hildebrand for helping me clarify this idea.

  3. See Goldman (2010) for an instance of a philosopher discussing the idea that we might apply the Theorem to intuitions.

  4. Throughout this paper I only discuss those who have a certain intuition and those who have a contradictory one. Sometimes, however, we find that one has no intuition on a topic at all. If so, what should this tell us? A fairly common response seems to be to treat this as no evidence at all. However, I have also seen philosophers respond to an absence of intuitions about a proposition as a sign that the proposition is implausible—in other words, treating the absence of intuitions that \(h\) as akin to intuitions that \({\sim }h\). Either of these two approaches is already easily modeled in the approaches I discuss above, but future research might suggest that some other alternative is more appropriate.

  5. My thanks to Michael Tooley for this example.

  6. One might think that what makes an intuition the intuition it is is the proposition that is intuited. That is, the intuition that \(p\) couldn’t have been the intuition that \({\sim }p\). If that is your view, then the notion of reliability I use here doesn’t make sense. It can be replaced with something like: reliability is the probability that a certain event or moment which contains an intuition will contain the intuition that \(p\), given that \(p\) is true. This doesn’t affect any of the arguments I make in this paper.

  7. Fred can have intuitions that are implausible to him because intuitions need not be believed. He might have very good, intuition-independent, reasons to think that the intuited proposition is false. Or he might have many other intuitions that suggest that the intuition is false. E.g. Fred might intuit that there are more integers than even integers despite being aware of proofs that this is false.

  8. My thanks to an anonymous referee for helping me to properly articulate this formulation.

  9. Since reliability values are probabilities, they can take any real number between 0 and 1 as their value. So I am really describing a method for closely approximating the probability that \(p\); see “Appendix B”.

  10. Note that causal dependence is only worrisome, for our purposes, when it holds between two intuitions in the set of intuitions one is using as evidence about a given proposition. If you gather my intuition that \(p\), and my intuition that \(p\) is dependent on Fred’s, this doesn’t matter if you don’t also gather Fred’s intuition. This is because dependence is bad (for our purposes) when it reduces the information that an intuition gives us. This only occurs when intuitions in the set gathered are dependent on each other, as some will be redundant given the other; dependence on non-members of the set gathered doesn’t create this redundancy.

  11. One thing to note here is that the credence we can achieve through 75% agreement among twenty intuitors will not be the same as the credence we can achieve through 75% agreement among a different number of intuitors. This is because the more intuitors we consult, the more we learn about the reliability or dependence values of intuitions, which will affect what we learn about the propositions intuited.

  12. For example, in the previous section I briefly mention the breaking point for an intermediate form of dependence. This was 0.15, which is quite high, since this form is like symmetric dependence in that each intuition in the majority has a chance of determining the content of every other intuition.

  13. One should also note that inductive generalization from empirical research on intuitions is already a bit risky. The intuitions that get studied and the data that gets reported are not randomly selected, but instead will tend to be selected for their surprising nature.

  14. Michael Huemer has argued that it is reasonable to treat one’s intuitions as more reliable than those of others in some situations (Huemer 2011). However, on his view this is only reasonable when one lacks evidence about the reliability of others; once one realizes that humans, or humans of certain types, generally share cognitive capacities, one has such evidence.

  15. As philosophers are notoriously skeptical of a posteriori arguments, pursuing empirical criticism of armchair philosophy when a priori arguments against intuitions were available would make one’s life unnecessarily difficult. So charity suggests that empirical critics of armchair philosophy either do not have a priori arguments against the use of intuitions, or see these as insufficiently strong.

  16. Another aspect of my approaches to aggregating intuitions is that I treat all of the intuitions elicited as equally reliable. Of course, they probably are not. However, we typically won’t know just how reliable each of our individual colleagues is. So we should calculate using the expected reliability of each of our intuitions, and the expected value of a normally distributed variable just is the mean value of the variable (Ross 2007).

  17. This concern was raised by two anonymous reviewers.

  18. Huemer acknowledges this in a footnote, but doesn’t discuss its ramifications for intuitions in philosophy.

  19. This might not be entirely true: we might ask someone, “What ethical theory is intuitively true?” and there are more than two possible answers here. But I don’t think that this is a model for typical appeals to intuition, which are about the truth or falsity of single propositions. When we present thought experiments to people, we rarely ask, “What is your intuition here?” without specifying what question the intuition is about.

  20. My thanks to an anonymous referee for bringing up this point.

  21. Some might think that philosophers should not employ methodology that is not actually or necessarily truth-conducive, even if such methodology is rational in light of what one reasonably believes; this concern was raised by one anonymous reviewer. This concern seems to reflect debates between Bayesians and classical statisticians about the use of necessarily truth-conducive methodology versus the “confirmation conducive” methodology of Bayesian epistemology. I have sought to set this sort of debate aside and focus on what makes our beliefs rational, an approach widely accepted in philosophy. My thanks to Hanti Lin for help in understanding this worry.

  22. Please note: the following equation should also consider the number of combinations of ways to get \(f\) and \(a\) intuitions that \(P\) and \({\sim }P\). However, this will cancel out in later equations, so I do not consider it here.

References

  • BonJour, L. (1998). In defense of pure reason. Cambridge: Cambridge University Press.

  • de Condorcet (1785). Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix. http://gallica.bnf.fr/ark:/12148/bpt6k417181/f4.image. Accessed 18 June 2012.

  • Goldman, A. (2010). Philosophical naturalism and intuitional methodology. Proceedings and Addresses of the American Philosophical Association, 84(2), 115–150.

  • Holder, R. D. (1998). Hume on miracles: Bayesian interpretation, multiple testimony, and the existence of God. British Journal for the Philosophy of Science, 49(1), 49–65.

  • Huemer, M. (2001). Skepticism and the veil of perception. Lanham: Rowman & Littlefield.

  • Huemer, M. (2008). Revisionary intuitionism. Social Philosophy and Policy, 25(1), 368–392.

  • Huemer, M. (2011). Epistemological egoism and agent-centered norms. In T. Dougherty (Ed.), Evidentialism and its discontents (pp. 17–33). Oxford: Oxford University Press.

  • Ladha, K. K. (1992). The Condorcet jury theorem, free speech, and correlated votes. American Journal of Political Science, 36(3), 617–634.

  • L’Ecuyer, P. (2012). SSJ: A Java library for stochastic simulation. http://www.iro.umontreal.ca/~simardr/ssj/indexe.html. Accessed 20 June 2012.

  • Lewis, C. I. (1946). An analysis of knowledge and valuation. LaSalle: Open Court Publishing Co.

  • Olsson, E. J., & Shogenji, T. (2004). Can we trust our memories? C. I. Lewis’s coherence argument. Synthese, 142, 21–41.

  • Ross, S. M. (2007). §2.4 Expectation of a random variable. In Introduction to probability models (9th ed.). Burlington: Academic Press.

  • Schum, D. A., & Martin, A. W. (1982). Formal and empirical research on cascaded inference in jurisprudence. Law & Society Review, 17(1), 105–152.

  • Sosa, E. (1998). Minimal intuitions. In M. DePaul & W. Ramsey (Eds.), Rethinking intuitions. Lanham: Rowman & Littlefield.

  • Van Cleve, J. (2011). Can coherence generate warrant ex nihilo? Probability and the logic of concurring witnesses. Philosophy and Phenomenological Research, 82(2), 337–380.


Acknowledgments

I owe a great deal of thanks to Julia Staffel, Kenny Easwaran, Dom Bailey, Chris Heathwood, Eric Chwang, Tyler Hildebrand, Adam Keeney, Michael Tooley, Hanti Lin, and Christian Lee for helping me to sort out my arguments and ideas. Thank you also to Jonah Miller for his suggestions on some of the math. This paper also greatly benefitted from anonymous journal referees, whose feedback on previous versions pushed me to really work out the material in this final version.

Author information

Correspondence to Brian Talbot.

Appendices

Appendix A: Aggregating independent intuitions

In this section I show that, if intuitions are independent, the absolute sizes of the groups intuiting some proposition and its negation are irrelevant to the epistemic probability one should assign to the proposition. What matters instead is the difference in size between these two groups.

If we have some proposition \(P\), and some evidence \(E\) which consists of intuitions for and against \(P\), we can calculate the credence we should assign to \(P\) via Bayes’ theorem:

$$\begin{aligned} \Pr \left( {P|E} \right) =\frac{\Pr \left( P \right) \Pr (E|P)}{\Pr \left( P \right) \Pr \left( {E|P} \right) +\Pr \left( {\sim }P \right) \Pr (E|{\sim }P)} \end{aligned}$$
(1)

Let’s define some useful variables:

\(f\) :

is the number of people who intuit that \(P\); by stipulation, \(P\) will always be the proposition found intuitive by the majority.

\(a\) :

is the number of people who find \({\sim }P\) intuitive. I will assume no one is agnostic.

\(r\) :

is the on-average reliability of the relevant body of intuitions.

To calculate \(\Pr (P{\vert }E)\) via Bayes’ theorem, we need to calculate \(\Pr (E{\vert }P)\) and \(\Pr (E{\vert } {\sim }P)\). \(\Pr (E{\vert }P)\) is the probability that \(f\) people find \(P\) intuitive and \(a\) people find it counter-intuitive, given that \(P\) is true. Given that \(P\) is true, \(f\) people can intuit that \(P\) only if they get things right, and \(a\) can intuit that \({\sim }P\) only if they get things wrong. For each intuitor, the probability of getting things right is \(r\) and the probability of getting things wrong is \(1-r\). Because we are assuming independence, the probability that \(f\) people get things right and \(a\) get things wrong is \(r^{f}(1-r)^{a}\), times the number of combinations of people that can result in this split of intuitions. Note that this last factor will appear in \(\hbox {Pr}(E{\vert } {\sim }P)\), and so can be cancelled out of the equation.

To find \(\Pr (E{\vert } {\sim }P)\), we instead multiply \((1-r)^{f}\) times \(r^{a}\), again times the number of possible combinations (which cancels out).
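The cancellation of the combinatorial factor can be checked numerically. The following Python sketch (the function name and the example numbers are my own illustrative assumptions, not from the paper) plugs the two likelihoods into Bayes’ theorem (Eq. 1) with and without the factor \(\binom{f+a}{f}\) and confirms that the posterior is unchanged.

```python
import math

def posterior(prior, r, f, a, include_comb=False):
    """Posterior Pr(P|E) via Eq. (1), with likelihoods r^f (1-r)^a and
    (1-r)^f r^a, optionally multiplied by the combinatorial factor."""
    c = math.comb(f + a, f) if include_comb else 1
    like_p = c * r**f * (1 - r)**a          # Pr(E|P)
    like_not_p = c * (1 - r)**f * r**a      # Pr(E|~P)
    return prior * like_p / (prior * like_p + (1 - prior) * like_not_p)

# The factor appears in both likelihoods, so it cancels from the posterior:
with_c = posterior(0.5, 0.7, 12, 4, include_comb=True)
without_c = posterior(0.5, 0.7, 12, 4, include_comb=False)
assert abs(with_c - without_c) < 1e-12
```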

Putting this all together, we get:

$$\begin{aligned} \Pr \left( {P|E} \right) =\frac{\Pr \left( P \right) r^{f}(1-r)^{a}}{\Pr \left( P \right) r^{f}(1-r)^{a}+\Pr \left( {\sim }P \right) r^{a}(1-r)^{f}} \end{aligned}$$
(2)

We can factor \(r^{a}\) and \((1-r)^{a}\) out of both the numerator and denominator (since we are assuming \(f>a\)), and get the following:

$$\begin{aligned} \Pr \left( {P |E} \right) =\frac{\Pr \left( P \right) r^{f-a}}{\Pr \left( P \right) r^{f-a}+\Pr \left( {\sim }P \right) (1-r)^{f-a}} \end{aligned}$$
(3)

Here we see that only the difference between the number of people intuiting that \(P\) and the number intuiting that \({\sim }P\) matters, not the size of either group.
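This result can be illustrated numerically. The Python sketch below (the function name, prior, and reliability value are my own illustrative assumptions) implements Eq. 3 and shows that a 15-to-5 split and a 110-to-100 split yield identical posteriors, since both have \(f-a=10\).

```python
def posterior_from_margin(prior, r, f, a):
    """Eq. (3): posterior for P given f intuitions that P and a that ~P,
    assuming independent intuitions, each of reliability r."""
    num = prior * r**(f - a)
    den = num + (1 - prior) * (1 - r)**(f - a)
    return num / den

# Same margin f - a = 10, very different group sizes:
small_group = posterior_from_margin(0.5, 0.6, 15, 5)
large_group = posterior_from_margin(0.5, 0.6, 110, 100)
assert small_group == large_group  # only the margin matters
```

Even a modest reliability of 0.6 pushes the posterior above 0.98 here, since the margin of ten enters as an exponent.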

Appendix B: Implementing the narrowing down approach

This section will discuss how to implement the narrowing down approach, assuming that there is no possible causal dependence between intuitions.

For any proposition \(P\) and intuitions \(I_{1}\ldots I_{n}\), which are intuitions that \(P\) or intuitions that \({\sim }P\), by Bayes’ theorem:

$$\begin{aligned} \Pr \left( {P|I_1 \ldots I_n } \right) =\frac{\Pr \left( P \right) \Pr \left( {I_1 \ldots I_n |P} \right) }{\Pr \left( P \right) \Pr \left( {I_1 \ldots I_n |P} \right) +\Pr \left( {\sim }P \right) \Pr (I_1 \ldots I_n |{\sim }P)} \end{aligned}$$
(4)

\(\Pr (P)\) and \(\Pr ({\sim }P)\) are the prior probabilities of \(P\) and \({\sim }P\), and I’ll assume that these are known (and rational). So what needs to be calculated are \(\Pr (I_{1}{\ldots }I_{n}{\vert }P)\) and \(\Pr (I_{1}{\ldots }I_{n}{\vert } {\sim }P)\).

Let’s start with \(\Pr (I_{1}\ldots I_{n}{\vert } P)\). Since \(I_{1} {\ldots }I_{n}\) is a conjunction of intuitions, \(\Pr (I_{1}{\ldots }I_{n}{\vert } P)\) is the product of the probability of each intuition in the set given \(P\); because these intuitions are not independent, the probability of each intuition in the set must be calculated given all intuitions before it (by the multiplication rule). Formally,

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) =\mathop \prod \limits _{i=1}^n \Pr (I_i |P \& \left[ {I_1 \ldots I_{i-1} } \right] ) \end{aligned}$$
(5)

\(I_{1}{\ldots }I _{i-1}\) is in brackets because, when \(i=1\), no term should be substituted in that place.

How do we calculate \( \Pr (I_{i}{\vert }P \& I_{1} {\ldots }I_{i-1})\) for each intuition in the set? We don’t know precisely what reliability value each intuition in the group has. However, we do know all the possible reliability values each could have. If the possible reliability values are \(r_{1} {\ldots } r_{m}\), the probability of any given intuition occurring, given \(P\), is the probability of it occurring and \(r_{1}\) obtaining, or it occurring and \(r_{2}\) obtaining, or it occurring and \(r_{3}\) obtaining, and so forth. Formally,

$$ \begin{aligned} \Pr \left( {I_i |P \& \left[ {I_1 \ldots I_{i-1} } \right] } \right) \approx \mathop \sum \limits _{j=1}^{j=m} \Pr (r_j |P \& \left[ {I_1 \ldots I_{i-1} } \right] )\Pr (I_i |P \& r_j \& \left[ {I_1 \ldots I_{i-1} } \right] )\nonumber \\ \end{aligned}$$
(6)

It should be noted that reliability can take any value from 0 to 1, so there are uncountable possible reliability values and they cannot actually be listed as a series \(r_{1} {\ldots } r_{m}\). To be really precise, Eq. 6 should be an integral, rather than a summation. However, since in practice we want a numeric value for \( \Pr (I_{i}{\vert }P \& I_{1}{\ldots }I_{i-1})\), we must use some method for approximating the value of this integral. Treating \(r\) as a discrete variable and summing the results for each possible value (as in Eq. 6) is one such method. To generate the values discussed in the main text, I treated \(r\) as a discrete variable that could take one of 10,000 values between 0 and 1 (this required a computer to do the math for me). This should give us a very close approximation of the value of \( \Pr (I_{i}{\vert }P \& I_{1}{\ldots }I_{i-1})\).
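The discretization step can be made concrete with a short Python sketch. The grid size and the normal prior’s parameters (mean 0.6, standard deviation 0.1) are my own illustrative assumptions; the paper only says that 10,000 values and normally distributed priors were used. Reliability is treated as a discrete variable on a grid over (0, 1), with prior weights proportional to a normal density and renormalized to sum to 1.

```python
import math

M = 10_000                      # number of discrete reliability values
MU, SIGMA = 0.6, 0.1            # assumed normal prior over reliability

r_grid = [(j + 0.5) / M for j in range(M)]   # cell midpoints in (0, 1)
density = [math.exp(-((r - MU) ** 2) / (2 * SIGMA ** 2)) for r in r_grid]
total = sum(density)
weights = [d / total for d in density]       # discrete prior Pr(r_j)

# Base case of Eq. (6) (i = 1, no earlier intuitions): the probability of a
# single intuition that P, given P, is the expected reliability.
p_first = sum(w * r for w, r in zip(weights, r_grid))
```

Because the prior is symmetric about 0.6 and its tails beyond 0 and 1 are negligible, `p_first` comes out very close to 0.6, in line with the point in note 16 that the expected value of a normally distributed reliability is its mean.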

We now have two terms to calculate: \( \Pr (I_{i}{\vert }P \& r_{j} \& I_{1}{\ldots }I_{i-1})\) and \( \Pr (r_{j}{\vert }P \& I_{1}{\ldots }I_{i-1})\). We can simplify \( \Pr (I_{i}{\vert }P \& r_{j} \& I_{1}{\ldots }I_{i-1})\). We know the content of the intuition \(I_{i}\), and are given a reliability value and the truth of the relevant \(P\). With these givens, \(I_{1}{\ldots }I_{i-1}\) make no difference to the probability of \(I_{i}\). So

$$ \begin{aligned} \Pr \left( {I_i |P \& r_j \& \left[ {I_1 \ldots I_{i-1} } \right] } \right) =\Pr \left( {I_i{\vert } P \& r_j } \right) \end{aligned}$$
(7)

We’ll discuss how to calculate this at the end of the section (following Eq. 19).

How do we calculate \( \Pr (r_{j}{\vert }P \& I_{1}{\ldots }I_{i-1})\)? By Bayes’ theorem:

$$ \begin{aligned} \mathrm{Pr}(r_j |P \& I_1 \ldots I_{i-1} )=\frac{\mathrm{Pr}(r_j )\mathrm{Pr}(P \& I_1 \ldots I_{i-1} |r_j )}{\mathrm{Pr}(P \& I_1 \ldots I_{i-1} )} \end{aligned}$$
(8)

\(\Pr (r_{j})\) is the prior probability of \(r_{j}\), and we’ll assume it is known and rational. (In my calculations in the main text, I assumed that the prior probabilities of reliabilities are normally distributed, and used a programming library by L’Ecuyer (2012) to calculate these priors.) By the multiplication rule:

$$ \begin{aligned} \mathrm{Pr}(P \& I_1 \ldots I_{i-1} |r_j )=\Pr (P|r_j )\Pr (I_1 \ldots I_{i-1} |P \& r_j ) \end{aligned}$$
(9)

Being given \(r_{j}\) doesn’t by itself affect \(\Pr (P)\), so \(\Pr (P{\vert }r_{j}) = \Pr (P)\). Substituting this into Eq. 9, and Eq. 9 into Eq. 8, we get:

$$ \begin{aligned} \mathrm{Pr}(r_j |P \& I_1 \ldots I_{i-1} )=\frac{\mathrm{Pr}(r_j )\Pr (P)\Pr (I_1 \ldots I_{i-1} |P \& r_j )}{\mathrm{Pr}(P \& I_1 \ldots I_{i-1} )} \end{aligned}$$
(10)

For a moment, I want to focus on the denominator here. By the multiplication rule:

$$ \begin{aligned} \mathrm{Pr}\left( {P \& I_1 \ldots I_{i-1} } \right) =\Pr \left( P \right) \Pr (I_1 \ldots I_{i-1} |P) \end{aligned}$$
(11)

We can substitute this into Eq. 10, and cancel the \(\Pr (P)\) in the numerator and denominator, getting:

$$ \begin{aligned} \mathrm{Pr}(r_j |P \& I_1 \ldots I_{i-1} )=\frac{\mathrm{Pr}(r_j )\Pr (I_1 \ldots I_{i-1} |P \& r_j )}{\Pr (I_1 \ldots I_{i-1} |P)} \end{aligned}$$
(12)

Some of these terms will cancel out with other terms from elsewhere in our equation. So, rather than simplifying or expanding Eq. 12 further, let’s see how what we’ve done fits into what we started out trying to do.

Remember, everything we have done so far has been to calculate \(\Pr (I_{1}{\ldots }I_{n}{\vert }P)\), which is a term in the numerator of Bayes’ theorem from Eq. 4. As we saw in Eq. 5, \(\Pr (I_{1}{\ldots }I_{n}{\vert } P)\) is equal to the product of the probability of each intuition in \(I_{1}{\ldots }I_{n}\), given \(P\) (since these intuitions are dependent on each other, each must be calculated given all the prior intuitions). If we have \(n\) intuitions, we can also think of this as the product of the probabilities of intuitions 1 through \(n-1\), times the probability of intuition \(n\):

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) =\Pr \left( {I_1 \ldots I_{n-1} |P} \right) \Pr (I_n |P \& I_1 \ldots I_{n-1} ) \end{aligned}$$
(13)

This will allow us to cancel some terms, as we’ll see in a moment. For now, let’s simplify Eq. 13. Equation 6 shows us how to calculate the probability of a single intuition, and we can substitute it for \( \Pr (I_{n}{\vert }P \& I_{1}{\ldots }I_{n-1})\); Eq. 7 shows us how to simplify one of the terms in Eq. 6, giving us:

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) \approx \Pr \left( {I_1 \ldots I_{n-1} |P} \right) \mathop \sum \limits _{j=1}^{j=m} \left[ {\Pr (r_j |P \& I_1 \ldots I_{n-1} )\Pr (I_n |P \& r_j )} \right] \nonumber \\ \end{aligned}$$
(14)

Equation 12 tells us how to expand \( \Pr (r_{j}{\vert }P \& I_{1}{\ldots }I_{n-1})\); substituting that into Eq. 14, we get:

$$ \begin{aligned}&\Pr \left( {I_1 \ldots I_n |P} \right) \approx \nonumber \\&\qquad \ \Pr \left( {I_1 \ldots I_{n-1} |P} \right) \mathop \sum \limits _{j=1}^{j=m} \left[ {\frac{\mathrm{Pr}(r_j )\Pr (I_1 \ldots I_{n-1} |P \& r_j )\Pr (I_n |P \& r_j )}{\mathrm{Pr}(I_1 \ldots I_{n-1} |P)}} \right] \quad \end{aligned}$$
(15)

Let’s focus on the summation in Eq. 15. It sums the value of the given fraction for all possible \(r\) values. However, since the probability term in the denominator does not take an \(r\) value as given, it does not change as the \(r\) values considered vary. So every value in this summation has \(\Pr (I_{1}{\ldots }I_{n-1}{\vert }P)\) as a common denominator. Thus, rather than considering this as the summation of values of a fraction, we can consider it as a fraction with a summation in its numerator:

$$ \begin{aligned}&\mathop \sum \limits _{j=1}^{j=m} \frac{\Pr (r_j )\Pr (I_1 \ldots I_{n-1} |P \& r_j )\Pr (I_n |P \& r_j )}{\Pr (I_1 \ldots I_{n-1} |P)}\nonumber \\&\quad =\frac{\mathop \sum \nolimits _{j=1}^{j=m} \left[ {\Pr \left( {r_j } \right) \Pr \left( {I_1 \ldots I_{n-1} |P \& r_j } \right) \Pr (I_n |P \& r_j )} \right] }{\Pr (I_1 \ldots I_{n-1} |P)} \end{aligned}$$
(16)

Substituting into Eq. 15, we get:

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) \approx \Pr \left( {I_1 \ldots I_{n-1} |P} \right) \frac{\mathop \sum \nolimits _{j=1}^{j=m} \left[ {\Pr \left( {r_j } \right) \Pr \left( {I_1 \ldots I_{n-1} |P \& r_j } \right) \Pr (I_n |P \& r_j )} \right] }{\Pr (I_1 \ldots I_{n-1} |P)}\nonumber \\ \end{aligned}$$
(17)

The denominator of the fraction cancels with the previous term in the equation, leaving us with:

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) \approx \mathop \sum \limits _{j=1}^{j=m} \left[ {\Pr (r_j )\Pr (I_1 \ldots I_{n-1} |P \& r_j )\Pr (I_n |P \& r_j )} \right] \end{aligned}$$
(18)

This can be slightly simplified: the probability of \(I_{1}{\ldots }I_{n-1 }\) (given \(P\) and some \(r\)) times the probability of \(I_{n}\) (given \(P\) and the same \(r\)) is just the probability of \(I_{1}{\ldots }I_{n}\) (given \(P\) and that \(r\)):

$$ \begin{aligned} \Pr \left( {I_{1} \ldots I_{n} |P} \right) \approx \mathop \sum \limits _{{j = 1}}^{{j = m}} \left[ {\Pr (r_{j} ) {\text {Pr}}(I_{1} \ldots I_{n} |P \& r_{j} )} \right] \end{aligned}$$
(19)

To calculate this, we need to know how to calculate \(\Pr (I_{1}{\ldots }I_{n})\) given the truth of \(P\) and some reliability value. This is relatively easy, because we can now act as if the intuitions are independent of each other. When implementing the narrowing down approach by itself, we are assuming that there is no causal dependence between intuitions. The only dependence relation we are concerned with is informational dependence—we learn something about how reliable intuitions are by observing what pattern of intuitions we find on this topic. If we are given a reliability value, however, then this sort of dependence is no longer a factor. Thus, given \(P\) and some reliability value \(r_{j}\), we can calculate \(\Pr (I_{1}{\ldots }I_n)\) by simply multiplying the probability of each intuition in the set together. The probability of each intuition depends on its content. Given \(P\), intuitions that \(P\) are correct, and so have a probability of occurring equal to the given reliability value \(r_{j}\). Intuitions that \({\sim }P\) have a probability of \((1-r_{j})\).

So, if there are \(f\) intuitions for \(P\) and \(a\) intuitions that \({\sim }P\):Footnote 22

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P \& r_j } \right) =r_j^f (1-r_j )^{a} \end{aligned}$$
(20)

Substituting into Eq. 19:

$$\begin{aligned} \Pr \left( {I_1 \ldots I_n |P} \right) \approx \mathop \sum \limits _{j=1}^{j=m} \left[ {\Pr (r_j )r_j^f (1-r_j )^{a}} \right] \end{aligned}$$
(21)

If we look back at Eq. 4, we see that we now only have to calculate the denominator of Bayes’ theorem. The term we need to calculate is \(\Pr (I_{1}{\ldots }I_{n}{\vert } {\sim }P)\). Almost everything I have just said applies to calculating this term; the only difference is how we calculate the probability of each intuition given some \(r\) value and \({\sim }P\). For all the intuitions that \(P\), the probability of them occurring given some \(r\) value and \({\sim }P\) is \(1-r\); for intuitions that \({\sim }P\), their probability given some \(r\) and \({\sim }P\) is \(r\). So

$$\begin{aligned} \Pr \left( {I_1 \ldots I_n |{\sim }P} \right) \approx \mathop \sum \limits _{j=1}^{j=m} \left[ {\Pr (r_j )(1-r_j )^{f}r_j^a } \right] \end{aligned}$$
(22)

Plugging all of this in to Eq. 4, we get:

$$\begin{aligned}&\Pr \left( {P|I_1 \ldots I_n } \right) \approx \nonumber \\&\quad \frac{\Pr \left( P \right) \mathop \sum \nolimits _{j=1}^{j=m} \left[ {\Pr (r_j )r_j^f (1-r_j )^{a}} \right] }{\Pr \left( P \right) \mathop \sum \nolimits _{j=1}^{j=m} \left[ {\Pr (r_j )r_j^f (1-r_j )^{a}} \right] +\Pr \left( {\sim }P \right) \mathop \sum \nolimits _{j=1}^{j=m} \left[ {\Pr (r_j )(1-r_j )^{f}r_j^a } \right] }\quad \quad \end{aligned}$$
(23)
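Equation 23 can be implemented directly. In the Python sketch below, the grid of 10,000 reliability values and the normal prior with mean 0.6 and standard deviation 0.1 are my own illustrative choices, echoing the setup described above; the function name is likewise hypothetical.

```python
import math

def narrowing_down_posterior(prior_p, f, a, m=10_000, mu=0.6, sigma=0.1):
    """Eq. (23): posterior Pr(P | I_1...I_n) under the narrowing down
    approach, with a discretized normal prior over reliability."""
    r_grid = [(j + 0.5) / m for j in range(m)]
    density = [math.exp(-((r - mu) ** 2) / (2 * sigma ** 2)) for r in r_grid]
    total = sum(density)
    weights = [d / total for d in density]                # Pr(r_j)

    # Eq. (21): Pr(I_1...I_n | P);  Eq. (22): Pr(I_1...I_n | ~P)
    like_p = sum(w * r**f * (1 - r)**a for w, r in zip(weights, r_grid))
    like_not_p = sum(w * (1 - r)**f * r**a for w, r in zip(weights, r_grid))

    num = prior_p * like_p
    return num / (num + (1 - prior_p) * like_not_p)

# An even split is uninformative, and larger majorities raise the posterior:
assert abs(narrowing_down_posterior(0.5, 5, 5) - 0.5) < 1e-9
assert narrowing_down_posterior(0.5, 8, 2) > narrowing_down_posterior(0.5, 6, 4) > 0.5
```

The even-split check reflects the symmetry of Eqs. 21 and 22 when \(f=a\): the two likelihoods coincide, so the evidence leaves the prior untouched.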

Appendix C: Causal dependence

In this section, we’ll discuss how to deal with possible causal dependence, assuming we have a given reliability value for our set of intuitions.

The calculations for the symmetric and asymmetric dependence approaches are derived almost exactly as above. Instead of narrowing down a reliability value, we consider a range of degrees of dependence; we’ll use \(d\) to stand for these. This doesn’t make a big difference to the derivations: for most of the discussion in “Appendix B”, the actual meaning of the \(r\) term—that it represents a reliability value—didn’t make a difference to the algebra being done. We can simply swap \(d\) for \(r\) for most of the derivation without issue, as long as we then add a reliability value as a given.

Where things diverge from the discussion in “Appendix B” substantially is in the final steps in the derivation: Eqs. 20 through 23. This is where we finally calculate the probability of an intuition, or of a series of intuitions. Modifying Eq. 19 so that it is relevant to causal dependence, we get:

$$ \begin{aligned} \Pr \left( {I_1 \ldots I_n |P \& r} \right) \approx \mathop \sum \limits _{j=1}^{j=m} \left[ {\Pr (d_j )\Pr (I_1 \ldots I_n |P \& r \& d_j )} \right] \end{aligned}$$
(24)

Note that \(r\) is a given on the left side of the equation. However, \(r\) need not be a given in \(\Pr (d)\), as \(\Pr (d{\vert }r)=\Pr (d)\).

The calculation of \( \Pr (I_{1}{\ldots }I_{n}{\vert }P \& r \& d)\) varies depending on what notion of dependence we use. Consider first symmetric dependence. For intuitions that \(P\), the probability that they obtain is equal to the probability that: one intuits correctly and is not influenced by previous intuitions or one has been influenced by previous intuitions that \(P\) but not by previous intuitions that \({\sim }P\). The probability of not being influenced by \(n\) intuitions is \((1-d)^{n}\), and the probability of being influenced by at least one out of \(n\) intuitions is \((1-(1-d)^{n})\). Thus, the probability of intuiting that \(P\), given \(r\), \(P\), \(d\), and previous intuitions, is

$$ \begin{aligned}&\Pr \left( {intuiting\;that\;P|P \& d \& r \& previous\;intuitions} \right) \nonumber \\&\quad =r(1-d)^{all\;prior\;intuitions}\nonumber \\&\quad +(1-(1-d)^{prior\;intuitions\;that\;P}) (1-d)^{prior\;intuitions\;that\;\sim P} \end{aligned}$$
(25)

The probability of intuiting that \({\sim }P\), given \(r\), \(P\), \(d\), and previous intuitions is:

$$ \begin{aligned}&\Pr \left( {intuiting\;that\;{\sim }P|P \& d \& r \& previous\;intuitions} \right) \nonumber \\&\quad =(1-r)\left( {1-d} \right) ^{all\;prior\;intuitions}\nonumber \\&\quad +(\left( {1-d} \right) ^{prior\;intuitions\;that\;P})(1-\left( {1-d} \right) ^{prior\;intuitions\;that\;\sim P}) \end{aligned}$$
(26)

The order we consider intuitions in doesn’t matter, so to simplify our calculations, we can consider all intuitions that \(P\) first, and then all intuitions that \({\sim }P\). For intuitions that \(P\), this allows us to ignore the possibility of influence by intuitions that \({\sim }P\).

To calculate \( \Pr (I_{1}{\ldots }I_{n}{\vert }P \& r \& d_{j})\) let \(f\) be the number of intuitions that \(P\) and \(a\) the number that \({\sim }P\):

$$ \begin{aligned}&\Pr \left( {I_1 \ldots I_n |P \& r \& d_j } \right) \nonumber \\&\quad =\mathop \prod \limits _{i=1}^{i=f} \left[ {r\left( {1-d_j } \right) ^{i-1}+\left( {1-\left( {1-d_j } \right) ^{i-1}} \right) } \right] \nonumber \\&\quad \mathop \prod \limits _{i=1}^{i=a} \left[ (1-r)\left( {1-d_j } \right) ^{f+i-1}+\left( {1-d_j } \right) ^{f}\left( {1-\left( {1-d_j } \right) ^{i-1}} \right) \right] \end{aligned}$$
(27)

This can be substituted into Eq. 24.
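A Python sketch of Eq. 27 follows (the function name and test values are my own illustrative assumptions). A useful sanity check: with \(d=0\) the formula reduces to the independent-case likelihood \(r^{f}(1-r)^{a}\) from Eq. 20.

```python
def prob_intuitions_given_p(r, d, f, a):
    """Eq. (27): Pr(I_1...I_n | P & r & d) under symmetric dependence,
    taking the f intuitions that P first, then the a intuitions that ~P."""
    prob = 1.0
    for i in range(1, f + 1):      # intuitions that P
        prob *= r * (1 - d) ** (i - 1) + (1 - (1 - d) ** (i - 1))
    for i in range(1, a + 1):      # intuitions that ~P, after f prior P-intuitions
        prob *= ((1 - r) * (1 - d) ** (f + i - 1)
                 + (1 - d) ** f * (1 - (1 - d) ** (i - 1)))
    return prob

# With no dependence (d = 0), this is just the independent likelihood:
assert abs(prob_intuitions_given_p(0.6, 0.0, 3, 2) - 0.6**3 * 0.4**2) < 1e-12
```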

The calculation for \( \Pr (I_{1}{\ldots }I_{n}{\vert } {\sim }P \& r)\) can be derived from what I’ve just said, with minor changes.

For asymmetric dependence, one needs to make changes to Eqs. 25 and 26. On this approach, an intuition can only be influenced by the majority view. The probability that an intuition was or was not influenced is simply based on the degree of asymmetric dependence \(d\) and is independent of the size of the majority. Given \(P\), for intuitions that agree with the majority, their probability given some degree of asymmetric dependence is the chance that they were correct and were not influenced, or that they were influenced by the majority. For intuitions that disagree with the majority (given \(P)\), their probability is the probability that they were incorrect and were not influenced.

About this article

Cite this article

Talbot, B. Why so negative? Evidence aggregation and armchair philosophy. Synthese 191, 3865–3896 (2014). https://doi.org/10.1007/s11229-014-0509-z

