Representing credal imprecision: from sets of measures to hierarchical Bayesian models

  • Daniel Lassiter


The basic Bayesian model of credence states, where each individual’s belief state is represented by a single probability measure, has been criticized as psychologically implausible, unable to represent the intuitive distinction between precise and imprecise probabilities, and normatively unjustifiable due to a need to adopt arbitrary, unmotivated priors. These arguments are often used to motivate a model on which imprecise credal states are represented by sets of probability measures. I connect this debate with recent work in Bayesian cognitive science, where probabilistic models are typically provided with explicit hierarchical structure. Hierarchical Bayesian models are immune to many classic arguments against single-measure models. They represent grades of imprecision in probability assignments automatically, have strong psychological motivation, and can be normatively justified even when certain arbitrary decisions are required. In addition, hierarchical models show much more plausible learning behavior than flat representations in terms of sets of measures, which—on standard assumptions about update—rule out simple cases of learning from a starting point of total ignorance.
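The learning contrast described above can be illustrated with a toy beta-binomial sketch (not from the paper; the function names and the choice of data are illustrative). A hierarchical single-measure model represents ignorance about a coin's bias as a uniform hyperprior over the bias, which conjugate updating revises toward the observed frequency; the vacuous set-of-measures representation, updated member by member, stays vacuous:

```python
from fractions import Fraction

# Hierarchical (single-measure) model: ignorance about a coin's bias is a
# uniform hyperprior over the bias theta, i.e. theta ~ Beta(1, 1).
# Conjugate update after h heads and t tails yields Beta(1 + h, 1 + t).
def beta_posterior_mean(h, t, a=1, b=1):
    """Posterior mean of the bias under a Beta(a, b) prior."""
    return Fraction(a + h, a + b + h + t)

# Sets-of-measures model: ignorance is the vacuous credal set
# {Bernoulli(theta) : theta in [0, 1]}.  Conditioning each member on the
# data leaves it fixed (each member is dogmatic about theta), so the
# interval of posterior expectations for the next flip is still [0, 1].
def credal_posterior_interval(h, t):
    """Lower/upper posterior expectation over the vacuous credal set."""
    return (0.0, 1.0)

# After 8 heads and 2 tails the hierarchical model has learned...
print(beta_posterior_mean(8, 2))        # → 3/4
# ...while the vacuous credal set has not budged:
print(credal_posterior_interval(8, 2))  # → (0.0, 1.0)
```

This is the standard-assumptions behavior the abstract alludes to: element-wise conditioning of a vacuous credal set never narrows it, whereas the hierarchical model's posterior mean tracks the data.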


Keywords: Bayesian epistemology · Bayesian cognitive science · Probability · Credal imprecision · Philosophy of cognitive science · Hierarchical Bayesian models · Bayesian networks




Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Linguistics, Stanford University, Stanford, USA
