
Synthese, Volume 161, Issue 1, pp. 89–118

Assessing theories, Bayes style

  • Franz Huber
Original Paper

Abstract

The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, B. C., 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness, measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh these two conflicting aspects against each other in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, C. G., 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press). Section 6 addresses the question of justification any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
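To make the trade-off described in (1)–(3) concrete, here is a minimal Python sketch. It assumes, purely for illustration, that plausibility is measured by the posterior probability p(H | E), that informativeness is measured by p(¬H | ¬E), and that the assessment value is a weighted sum of the two; these indicators and the weighting scheme are stand-in assumptions, not necessarily the strength and truth indicators developed in the paper.

```python
# Illustrative sketch of the plausibility-informativeness trade-off.
# ASSUMPTIONS (not taken from the paper): plausibility = p(H | E),
# informativeness = p(not-H | not-E), and the assessment value is a
# simple weighted sum of the two.

def assessment_value(p_h_given_e: float, p_noth_given_note: float,
                     weight: float = 0.5) -> float:
    """Weigh plausibility against informativeness.

    A weight close to 1 favours plausibility (truth); a weight close
    to 0 favours informativeness (logical strength).
    """
    plausibility = p_h_given_e            # truth indicator (assumed)
    informativeness = p_noth_given_note   # strength indicator (assumed)
    return weight * plausibility + (1 - weight) * informativeness


if __name__ == "__main__":
    # A logically weak hypothesis: highly plausible but uninformative.
    weak = assessment_value(p_h_given_e=0.95, p_noth_given_note=0.10)
    # A logically stronger hypothesis: slightly less plausible but far
    # more informative.
    strong = assessment_value(p_h_given_e=0.80, p_noth_given_note=0.70)
    print(f"weak hypothesis:   {weak:.3f}")    # 0.525
    print(f"strong hypothesis: {strong:.3f}")  # 0.750
```

With equal weights, the stronger hypothesis receives the higher assessment value because its surplus in informativeness (0.70 vs. 0.10) outweighs its small shortfall in plausibility (0.80 vs. 0.95), which is the behaviour claim (3) above calls for; how exactly the two aspects are to be weighed is the subject of Sections 1–4.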

Keywords

Theory evaluation · Confirmation · Probability


References

  1. Bar-Hillel, Y. (1952). Semantic information and its measures. In Transactions of the tenth conference on cybernetics (pp. 33–48). New York: Josiah Macy, Jr. Foundation. (Reprinted in Bar-Hillel (1964), 298–310.)
  2. Bar-Hillel, Y. (1955). An examination of information theory. Philosophy of Science, 22, 86–105. (Reprinted in Bar-Hillel (1964), 275–297.)
  3. Bar-Hillel, Y. (1964). Language and information. Selected essays on their theory and application. Reading, MA: Addison-Wesley.
  4. Bar-Hillel, Y., & Carnap, R. (1953). Semantic information. British Journal for the Philosophy of Science, 4, 147–157.
  5. Carnap, R. (1952). The continuum of inductive methods. Chicago: University of Chicago Press.
  6. Carnap, R. (1962). Logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.
  7. Carnap, R., & Bar-Hillel, Y. (1952). An outline of a theory of semantic information. Technical Report No. 247 of the Research Laboratory of Electronics, MIT. (Reprinted in Bar-Hillel (1964), 221–274.)
  8. Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, 96, 437–461.
  9. Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge, MA: MIT Press.
  10. Fitelson, B. (1999). The plurality of Bayesian measures of confirmation and the problem of measure sensitivity. Philosophy of Science, 66, S362–S378.
  11. Fitelson, B. (2001). Studies in Bayesian confirmation theory. Madison, WI: University of Wisconsin-Madison.
  12. Flach, P. A. (2000). Logical characterisations of inductive learning. In D. M. Gabbay & R. Kruse (Eds.), Abductive reasoning and learning (pp. 155–196). Dordrecht: Kluwer Academic Publishers.
  13. Gaifman, H., & Snir, M. (1982). Probabilities over rich languages, testing, and randomness. Journal of Symbolic Logic, 47, 495–548.
  14. Hempel, C. G. (1943). A purely syntactical definition of confirmation. Journal of Symbolic Logic, 8, 122–143.
  15. Hempel, C. G. (1945). Studies in the logic of confirmation. Mind, 54, 1–26, 97–121. (Reprinted in Hempel (1965), 3–51.)
  16. Hempel, C. G. (1960). Inductive inconsistencies. Synthese, 12, 439–469. (Reprinted in Hempel (1965), 53–79.)
  17. Hempel, C. G. (1962). Deductive-nomological vs. statistical explanation. In H. Feigl & G. Maxwell (Eds.), Minnesota studies in the philosophy of science (Vol. 3, pp. 98–169). Minneapolis: University of Minnesota Press.
  18. Hempel, C. G. (1965). Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press.
  19. Hempel, C. G., & Oppenheim, P. (1945). A definition of “degree of confirmation”. Philosophy of Science, 12, 98–115.
  20. Hempel, C. G., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135–175. (Reprinted in Hempel (1965), 245–295.)
  21. Hendricks, V. F. (2006). Mainstream and formal epistemology. Cambridge: Cambridge University Press.
  22. Hintikka, J., & Pietarinen, J. (1966). Semantic information and inductive logic. In J. Hintikka & P. Suppes (Eds.), Aspects of inductive logic. Amsterdam: North-Holland.
  23. Huber, F. (2004). Assessing theories. The problem of a quantitative theory of confirmation. PhD Dissertation. Erfurt: University of Erfurt.
  24. Huber, F. (2005). What is the point of confirmation? Philosophy of Science, 72, 1146–1159.
  25. Huber, F. (2007a). The logic of theory assessment. Journal of Philosophical Logic.
  26. Huber, F. (2007b). The plausibility-informativeness theory. In V. F. Hendricks & D. Pritchard (Eds.), New waves in epistemology. Aldershot: Ashgate.
  27. Joyce, J. M. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.
  28. Kelly, K. T. (1996). The logic of reliable inquiry. Oxford: Oxford University Press.
  29. Kelly, K. T. (1999). Iterated belief revision, reliability, and inductive amnesia. Erkenntnis, 50, 11–58.
  30. Levi, I. (1961). Decision theory and confirmation. Journal of Philosophy, 58, 614–625.
  31. Levi, I. (1963). Corroboration and rules of acceptance. British Journal for the Philosophy of Science, 13, 307–313.
  32. Levi, I. (1967). Gambling with truth. An essay on induction and the aims of science. London: Routledge.
  33. Levi, I. (1986). Probabilistic pettifoggery. Erkenntnis, 25, 133–140.
  34. Lipton, P. (2004). Inference to the best explanation (2nd ed.). London: Routledge.
  35. Milne, P. (2000). Is there a logic of confirmation transfer? Erkenntnis, 53, 309–335.
  36. Spohn, W. (1988). Ordinal conditional functions: A dynamic theory of epistemic states. In W. L. Harper & B. Skyrms (Eds.), Causation in decision, belief change, and statistics II (pp. 105–134). Dordrecht: Kluwer.
  37. Spohn, W. (1990). A general non-probabilistic theory of inductive reasoning. In R. D. Shachter et al. (Eds.), Uncertainty in artificial intelligence 4 (pp. 149–158). Amsterdam: North-Holland.
  38. van Fraassen, B. C. (1983). Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press.
  39. Zwirn, D., & Zwirn, H. P. (1996). Metaconfirmation. Theory and Decision, 41, 195–228.

Copyright information

© Springer Science+Business Media, Inc. 2007

Authors and Affiliations

  1. California Institute of Technology, Pasadena, USA
