There are various ways of articulating what Bayesian epistemology is and how it relates to other branches of formal and mainstream epistemology. Following in the footsteps of Ramsey, Richard Jeffrey outlines in his article “Probable Knowledge” one way of constructing an epistemology grounded in Bayesian theory. While knowledge is a central notion in traditional epistemology (and in various branches of formal epistemology), Jeffrey suggests an epistemology in which knowledge does not have the importance generally attributed to it. The idea is “[…] to try to make the concept of belief do the work that philosophers have generally assigned to the grander concept” (knowledge). Moreover, the notion of belief is analyzed pragmatically along the lines proposed by Ramsey: “the kind of measurement of belief with which probability is concerned is … a measurement of belief qua basis of action”. The result of this move is to conceive the logic of partial belief as a branch of decision theory. The first two essays in this section are therefore also quite relevant to the section on decision theory presented below (Ramsey’s essay contains the first axiomatic presentation of decision theory). Both Jeffrey and Ramsey present the foundations of an epistemology that is deeply intertwined with a theory of action. This move has a behaviorist pedigree, but the behavioral inspiration is perhaps not an essential ingredient of an interpretation of the formal theory that thus arises.
Keywords: Decision Theory; Acceptance Rule; Traditional Epistemology; Probable Knowledge; Full Belief
Suggested Further Reading
- An excellent introduction to Ramsey’s philosophy in general, and to the essay reprinted here in particular, can be found in the corresponding chapters of The Philosophy of F.P. Ramsey, by Nils-Eric Sahlin, Cambridge University Press, 2008. The classical introduction to Richard Jeffrey’s decision theory is his The Logic of Decision, University of Chicago Press, 2nd edition, 1990. A detailed articulation of radical probabilism can be found in Probability and the Art of Judgment, Cambridge Studies in Probability, Induction and Decision Theory, 1992. The theory of probability cores presented in van Fraassen’s article has been slightly modified and extended in a paper by Horacio Arlo-Costa and Rohit Parikh: “Conditional Probability and Defeasible Inference,” Journal of Philosophical Logic 34: 97–119, 2005. The best axiomatic presentation of primitive conditional probability is given by Lester E. Dubins in his article “Finitely Additive Conditional Probabilities, Conglomerability and Disintegrations,” The Annals of Probability 3(1): 89–99, 1975. Teddy Seidenfeld wrote an accessible note presenting recent results in this area: “Remarks on the Theory of Conditional Probability: Some Issues of Finite Versus Countable Additivity,” in Probability Theory, V.F. Hendricks et al. (eds.), 2001, pp. 167–178. Alan Hájek articulated a philosophical defense of the use of primitive conditional probability in “What Conditional Probability Could Not Be,” Synthese 137(3), 2003. Finally, there is an interesting article by David Makinson linking conditional probability and central issues in belief change: “Conditional Probability in the Light of Qualitative Belief Change,” to appear in a 2011 issue of the Journal of Philosophical Logic marking 25 years of AGM. References to other classical articles in this area by Karl Popper, Alfred Renyi and Bruno de Finetti appear in the aforementioned articles.
- Brian Skyrms has also contributed to the theory of higher-order probability. One accessible article is “Higher Order Degrees of Belief,” in D.H. Mellor (ed.), Prospects for Pragmatism, Cambridge: Cambridge University Press, 109–13. Isaac Levi has articulated his theory of indeterminate probabilities in various books and articles. One of the classical sources is The Enterprise of Knowledge, MIT Press, Cambridge, 1983. More information about Levi’s version of decision theory under uncertainty appears in Section 7 on Decision Theory below.
- There are two classical sources for the formulation of dynamic Dutch books. One is Teller, P. (1973), “Conditionalization and Observation,” Synthese 26: 218–258. The other is van Fraassen, Bas (1984), “Belief and the Will,” Journal of Philosophy 81: 235–256. The second piece also introduces a theory of second-order probability that complements the writings of Skyrms and Gaifman; it is there that van Fraassen introduces the Reflection Principle. The original formulation of some of the puzzles discussed by Arntzenius and Seidenfeld is a brief piece by Adam Elga: “Self-Locating Belief and the Sleeping Beauty Problem,” Analysis 60(2): 143–147, 2000. More detailed references to Carnap’s work on induction and confirmation can be found in the bibliography of Maher’s paper. The so-called Ravens Paradox appeared for the first time in a seminal article by Carl Hempel: “Studies in the Logic of Confirmation (I.),” Mind, New Series 54(213): 1–26, 1945. Branden Fitelson and James Hawthorne offer an interesting alternative Bayesian account of the paradox in “How Bayesian Confirmation Theory Handles the Paradox of the Ravens,” in E. Eells and J. Fetzer (eds.), The Place of Probability in Science, Chicago: Open Court. Further information about confirmation theory can be found in a classical book by John Earman: Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, MIT Press, 1992. Another classical source is Scientific Reasoning: The Bayesian Approach, by Colin Howson and Peter Urbach, Open Court, 3rd edition, 2005. An interesting book touching on a cluster of issues recently discussed in this area, such as coherence and the use of Bayesian networks in epistemology, is Bayesian Epistemology by Luc Bovens and Stephan Hartmann, Oxford University Press, 2004.
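Van Fraassen’s Reflection Principle, mentioned above, can be stated schematically (a standard rendering of the principle, not a quotation from the paper): writing $p_t$ for the agent’s credence function at the present time $t$ and $p_{t+\Delta}$ for her anticipated credence at a later time,

```latex
% Reflection: one's current credence in A, conditional on the
% supposition that one's future credence in A will be x, should be x.
p_t\bigl(A \mid p_{t+\Delta}(A) = x\bigr) = x
```

Dynamic Dutch book arguments of the Teller and van Fraassen variety purport to show that an agent who violates constraints of this kind is vulnerable to a sequence of bets guaranteeing a sure loss.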
- Another important formal epistemological issue is investigated by Timothy Williamson in his paper “Conditionalizing on Knowledge,” British Journal for the Philosophy of Science 49(1): 89–121, 1998, which aims to integrate the theory of probability and probability kinematics with other epistemological notions, such as knowledge. The theory of evidential probability that thus arises is based on two central ideas: (1) the evidential probability of a proposition is its probability conditional on the total evidence (or conditional on evidence propositions); (2) one’s total evidence is one’s total knowledge. The tools of epistemic logic are used to represent the relevant notion of knowledge.
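The two central ideas admit a compact schematic statement (the notation below is a standard gloss, not Williamson’s own symbolism): letting $E$ stand for the agent’s total evidence,

```latex
% (1) Evidential probability is probability conditional on total evidence:
P_{ev}(A) = P(A \mid E)

% (2) Total evidence is total knowledge: E is the conjunction of
% everything the agent knows.
E = \bigwedge \{\, H : \text{the agent knows } H \,\}
```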
- Jeffrey does not adopt (1), but according to his modified notion of updating, once a proposition has evidential probability 1 it keeps it thereafter (monotony). This is a feature shared by Jeffrey’s updating and the classical notion of updating. Williamson does embrace (1) but develops a model of updating that abandons monotony. This seems a very promising strategy given the limited applicability of a cumulative model of the growth of knowledge. Similarly motivated models (which are nevertheless formally quite different) have been proposed by Isaac Levi and Peter Gärdenfors. Gärdenfors’ model appears in his book Knowledge in Flux (see the corresponding reference in the bibliographical references of Chapter 6). Levi presents his account in The Enterprise of Knowledge (the reference appears in the bibliographical section below). Both models appeal directly not only to qualitative belief but also to models of belief change (contraction and revision; see Chapter 6).
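The contrast between the two update rules, and the monotony property they share, can be made explicit (a standard textbook rendering, not drawn from the papers themselves). For classical conditionalization on evidence $E$, and Jeffrey conditionalization over a partition $\{E_i\}$ whose new weights are $q_i$:

```latex
% Classical conditionalization on evidence E (requires P(E) > 0):
P_{\mathrm{new}}(A) = P(A \mid E) = \frac{P(A \wedge E)}{P(E)}

% Jeffrey conditionalization over a partition {E_i} with new weights q_i:
P_{\mathrm{new}}(A) = \sum_i P(A \mid E_i)\, q_i

% Monotony in both cases: if P(A) = 1, then P(A | E_i) = 1 for every E_i
% with P(E_i) > 0, so P_new(A) = sum_i q_i = 1. Certainty, once acquired,
% is never lost -- the feature Williamson's model abandons.
```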
- Philosophers of science have traditionally appealed to Bayesian theory to provide a Carnapian explication of the notoriously vague, elusive and paradox-prone notion of confirmation, or partial justification, in science. Patrick Maher revives the Carnapian program of inductive inference in his article “Probability Captures the Logic of Scientific Confirmation,” in Contemporary Debates in the Philosophy of Science, ed. Christopher Hitchcock, Blackwell, 69–93, in order to provide one such explication. In contrast, Clark Glymour and Kevin Kelly argue in their article “Why Probability Does Not Capture the Logic of Scientific Justification,” in Christopher Hitchcock (ed.), Contemporary Debates in the Philosophy of Science, London: Blackwell, 2004, that Bayesian confirmation cannot deliver the right kind of account of the logic of scientific confirmation. One reason for this skepticism is that they think scientific justification should reflect both how intrinsically difficult it is to find the truth and how efficient one’s methods are at finding it; their skepticism arises because they think that Bayesian confirmation captures neither aspect of scientific justification. While deploying their arguments, the two articles discuss the well-known paradox of confirmation first proposed by Hempel, Carnap’s research program on the philosophy of probability and induction, and the possible application of learning theory to offer a non-Bayesian account of scientific justification. The article by Glymour and Kelly continues Glymour’s earlier critique of the applications of Bayesianism in the philosophy of science (also reprinted here). This earlier piece contains the original versions of some influential and much-discussed conundra engendered by Bayesian confirmation (such as the problem of Old Evidence).