Bayesian reverse-engineering considered as a research strategy for cognitive science

Abstract

Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic in character and are often deployed unsystematically, Bayesian reverse-engineering avoids several important worries that have been raised about the explanatory credentials of Bayesian cognitive science: the worry that the lower levels of analysis are being ignored altogether; the challenge that the mathematical models being developed are unfalsifiable; and the charge that the terms ‘optimal’ and ‘rational’ have lost their customary normative force. But while Bayesian reverse-engineering is therefore a viable and productive research strategy, it is also no fool-proof recipe for explanatory success.

Notes

  1. Although Marr’s account may have its detractors (see e.g. Anderson 2015), a detailed discussion of its virtues and vices is beyond the scope of the present article. Moreover, although several other accounts of explanation in cognitive science have been proposed (see e.g. Cummins 1983; Milkowski 2013a; Zednik 2011), none have been similarly influential, especially within the Bayesian context. The present discussion will therefore assume a basic familiarity with Marr’s account, and will take its plausibility for granted. Marr’s account of explanation in cognitive science will have served its current purpose if it helps illuminate the principles of Bayesian reverse-engineering.

  2. In hindsight, the normative force of this claim appears overstated: Marr could not have predicted the degree to which e.g. connectionist modeling and brain imaging techniques would galvanize bottom-up research strategies which begin by answering questions at the implementational and algorithmic levels. Nevertheless, Marr’s considerations might still be viewed as a statement of the unique virtues of working from the top down: Whereas bottom-up strategies face the daunting task of making sense of complex physical structures in the brain, top-down approaches allow researchers to view such physical structures in a particular way, as contributing to the production of a behavioral or cognitive phenomenon that is itself already quite well-understood. Of course, bottom-up strategies are likely to have virtues of their own. Thus, cognitive science can only benefit from a heterogeneous methodological landscape.

  3. There are other ways to formalize inference under uncertainty. Because few of these are as well-developed and as well-known as standard probability theory, however, they are less widely used in cognitive scientific research.

  4. Sometimes, Bayesian rational analysis is reduced to analyzing evolutionarily relevant environments or tasks. While this approach has been very successful in behavioral ecology (Davies et al. 2012), in cognitive science it is often difficult or impossible to know what the correct ‘natural’ environment or task may be (though a notable exception may be natural image statistics: Simoncelli 2003). However, rational analysis has also been used very fruitfully in artificial laboratory experiments without any appeal to evolution, and often with very little information about the natural environment (Anderson 1991a). As these applications are conceptually simpler, we will focus on such cases here. The particular example we consider, bar-categorization in a simplified laboratory environment, is representative of those used in categorization studies with artificial stimuli and categories defined as probability distributions (see e.g. Ashby and Gott 1988; Fried and Holyoak 1984).
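
     To make this example concrete, the sketch below implements such an ideal observer, assuming (purely for illustration) two equiprobable categories defined as Gaussian distributions over bar length; all names and parameter values are hypothetical rather than drawn from any particular study.

        from scipy.stats import norm

        # Hypothetical task: two bar-length categories, each defined as a
        # Gaussian distribution (Ashby and Gott 1988-style), equal base rates.
        MU_A, MU_B, SIGMA = 4.0, 6.0, 1.0   # assumed means, common std. dev.
        PRIOR_A = 0.5

        def posterior_A(x):
            """Posterior probability that a bar of length x belongs to category A."""
            joint_A = norm.pdf(x, MU_A, SIGMA) * PRIOR_A
            joint_B = norm.pdf(x, MU_B, SIGMA) * (1 - PRIOR_A)
            return joint_A / (joint_A + joint_B)

        def ideal_observer(x):
            """Optimal response: choose the category with the higher posterior.
            With equal variances and equal priors this reduces to a fixed
            decision criterion at the midpoint (MU_A + MU_B) / 2."""
            return 'A' if posterior_A(x) > 0.5 else 'B'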

  5. As the name suggests, ideal observer models were originally used in investigations of visual perception (Geisler 1989). However, the term ‘ideal observer’ is now also used in contexts that involve other kinds of perception, and even action. We will follow this common practice.

  6. A cautionary note on the term ‘Bayesian’: There is a longstanding debate in philosophy and statistics about the meaning of probability (Hacking 1975). At one end of the debate, frequentists take probabilities to be the limits of relative frequencies that are objectively measurable by counting. At the other end, subjectivists (often also called ‘Bayesians’) take probabilities to be expressions of personal beliefs that have to fulfill certain rationality conditions. Although statisticians are becoming less dogmatic about this distinction (see e.g. Efron 2013; Kass 2011), some introductions to the Bayesian approach in cognitive science may give the impression that it is a defining feature of this approach that probabilities are used to model participants’ subjective beliefs. Note, however, that nothing in the method of Bayesian rational analysis as it has been presented thus far is Bayesian in the subjectivist sense (in fact, the ideal observer in the bar-categorization example gives the objectively optimal response in the frequentist sense). While Bayesian epistemology in philosophy is clearly subjectivist, Bayesian cognitive science is not. Probably in order to avoid this confusion, some authors who (according to the present nomenclature) invoke the method of Bayesian rational analysis seem to avoid the label ‘Bayesian’ for their approaches (see e.g. Geisler 1989; Kersten and Schrater 2002; Anderson 1991a; Oaksford and Chater 2001). That said, even in these research programs, Bayes’ theorem plays a central role, and tools from Bayesian statistics and machine learning are routinely invoked to analyze and understand the behavior of ideal observers. For this reason, they too should be subsumed under the Bayesian approach in cognitive science.
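
     For reference, the theorem at issue is simply

        p(h \mid d) = \frac{p(d \mid h)\, p(h)}{p(d)}

     Nothing in this formula dictates a subjectivist reading: in the bar-categorization example, the prior p(h) over categories and the likelihood p(d | h) are fixed by the experimental design, and can equally be read as objective long-run frequencies.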

  7. Strictly speaking, Ernst and Banks (2002) do not use Bayesian arguments when they claim that visual-haptic cue combination is statistically optimal. They use neither priors nor cost functions. However, their maximum likelihood estimator can be given a Bayesian justification under a specific (improper) prior and various cost functions.
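
     In symbols (the standard reliability-weighted form of the maximum-likelihood estimate, with \sigma_v and \sigma_h denoting the standard deviations of the visual and haptic estimates \hat{s}_v and \hat{s}_h):

        \hat{s}_{vh} = w_v \hat{s}_v + w_h \hat{s}_h, \qquad
        w_v = \frac{\sigma_h^2}{\sigma_v^2 + \sigma_h^2}, \quad
        w_h = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_h^2}

     Under a flat (improper) prior this weighted average coincides with the posterior mean, which is what licenses the Bayesian reading mentioned above.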

  8. Of course, demonstrating that Bayesian Fundamentalism is in fact a straw man requires more than mere words. Section 4, in which the strategy of Bayesian reverse-engineering is described in detail, shows that many proponents of the Bayesian approach do in fact formulate testable hypotheses not only at the computational level of analysis, but also at the algorithmic and implementational levels. On the Fundamentalist construal, this practice is difficult to accommodate.

  9. Some previous discussions of Bayesian Instrumentalism similarly acknowledge the need to go beyond the computational level of analysis. In particular, whereas Danks (2008) criticizes the explanatory value of Bayesian rational analysis insofar as it is confined to the computational level of analysis, he proposes a solution in which considerations of rationality and optimality are also deployed at lower levels. Similarly, having critiqued the explanatory force of (mathematical) unification at the computational level, Colombo and Hartmann (2015) propose to consider “what sorts of constraints can Bayesian unification place on causal-mechanical explanation” (Colombo and Hartmann 2015, p. 15). The latter proposal in particular is very much in line with the reverse-engineering view outlined below. That said, whereas Colombo and Hartmann do well to identify constraints that are approximately equivalent to the push-down and unification heuristics outlined in Sect. 4, that section will show that the constraints that are in fact imposed on the lower levels are far more numerous, heterogeneous, and unprincipled than previous commentators appear to have recognized.

  10. Notably, Tenenbaum et al. (2011, p. 1279), Chater et al. (2011, p. 196), and Frank (2013, p. 417) each describe their own approach as an exercise in “reverse-engineering the mind”.

  11. Milkowski (2013b) provides an alternative account of reverse-engineering in cognitive science which is distinct from, but not incompatible with, the one sketched here.

  12. In fact, the idea of a decision criterion for solving probabilistic categorization tasks has inspired the development of several learning algorithms that are all inconsistent with the Bayesian Coding Hypothesis, because they do not depend on the representation of probability distributions (Dorfman and Biderman 1971; Kac 1969; Stüttgen et al. 2013; Thomas 1973).
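
     As a minimal sketch of how such an algorithm can work without representing probability distributions, consider an error-correcting criterion-shift rule in the spirit of the models cited above; the response convention and step size are assumptions made for illustration, and the stimulus scale matches the hypothetical bar-categorization sketch in note 4.

        def update_criterion(c, x, response, correct, step=0.1):
            """Error-correcting criterion shift (hypothetical parametrization):
            the learner stores only a single scalar criterion c and responds
            'A' whenever x < c. After an error, the criterion moves so that
            the stimulus x would have received the correct response; no
            probability distributions are represented anywhere."""
            if correct:
                return c                 # leave the criterion untouched
            # An erroneous 'A' (x < c) means c was too high; lower it.
            # An erroneous 'B' (x >= c) means c was too low; raise it.
            return c - step if response == 'A' else c + step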

  13. Notably, what “comes out” at the algorithmic level might itself also feed back on what “goes in” at the computational level. Specifically, the intention to later invoke the push-down heuristic may already influence the selection of the computational-level tweaks discussed in Sect. 2.3: Investigators might tweak an ideal observer in one way rather than another just because that way suggests, via the push-down heuristic, certain candidates at the algorithmic level. In particular, it seems very natural to use the added-limitations tweak together with the push-down heuristic because the limitations can be set up to map directly onto hypothesized mechanisms. In so doing, investigators change what the ideal observer does and simultaneously select a corresponding hypothesis about how the relevant mechanism works. Importantly, although the push-down heuristic thus establishes an intimate link between the computational and algorithmic levels, this does not mean that there exists a level “between the computational and the algorithmic” (Griffiths et al. 2015). On the questions-based interpretation of Marr’s framework outlined in Sect. 2, appeals to in-between levels are confusing: The computational level of analysis concerns what- and why-questions; the algorithmic level concerns how-questions; what kinds of questions occupy the space in between? (see also Zednik 2016).

  14. For example, if investigators seek to test the Bayesian Coding Hypothesis they will often manipulate the basic building blocks of the generic algorithm, namely likelihoods, priors, and cost functions, and attempt to predict a subject’s performance under these manipulations (e.g. Battaglia et al. 2013; Houlsby et al. 2013; Maloney and Mamassian 2009). They may also search for neural correlates of likelihoods, priors, and cost functions (e.g. Berkes et al. 2011; Vilares et al. 2012). Other hypotheses will be tested differently.

  15. As these three algorithms only approximate the ideal observer, selecting them as algorithmic-level hypotheses simultaneously invokes the suboptimality tweak at the computational level. Each of the three approximations makes a concrete proposal for answering the how-question at the algorithmic level. But because they do not compute exactly the same function, they also differ in observable behavior (e.g. by showing different order effects), and therefore each answers the what-question at the computational level slightly differently. They nevertheless all answer the why-question at the computational level in the same way, because each is considered an approximation to the untweaked ideal observer (but see Kwisthout and van Rooij 2013).

  16. Another class of algorithms used for the same purpose, variational inference (Beal 2003), has proven similarly useful for the purposes of Bayesian reverse-engineering (e.g. Friston 2008; Sanborn and Silva 2013).
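
     To illustrate how such approximations trade optimality for tractability, the toy sketch below (reusing the hypothetical posterior_A function from the sketch in note 4; the sample count is arbitrary) draws only a handful of posterior samples and responds with their majority. With few samples the observer behaves variably and slightly suboptimally; as the number of samples grows, it converges to the ideal observer (cf. Vul et al. 2014).

        import numpy as np

        rng = np.random.default_rng(0)

        def sampled_response(x, n_samples=3):
            """Sampling approximation to the ideal observer: draw a few
            category hypotheses from the posterior over categories and
            answer with the majority vote."""
            draws = rng.random(n_samples) < posterior_A(x)  # Bernoulli('A') draws
            return 'A' if int(draws.sum()) > n_samples / 2 else 'B'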

  17. Psychological considerations may also already enter at the computational level through the added-limitations tweak discussed in Sect. 2.3.1. Take the bar-categorization example: Adding noise to the stimulus representation of the decision-criterion algorithm is plausible because it is known that subjects’ discrimination ability is limited.

  18. Although Bayesian reverse-engineering is a “top-down” research strategy in the sense discussed previously, regular use of the plausible-algorithms heuristic shows how it might be combined with “bottom-up” approaches that start with cognitive or neural principles. In particular, one might seek to determine exactly how an established cognitive architecture (such as ACT-R) could bring about an ideal observer’s optimal performance (Cooper and Peebles 2015; Thomson and Lebiere 2013). In general, being a proponent of top-down research strategies does not entail a rejection of bottom-up strategies. In this context, it is also worthwhile to remember that the same John Anderson who developed rational analysis also developed the ACT-R architecture and that the ‘R’ stands for ‘rational’.

  19. Just like the relation between the computational and the algorithmic levels (see footnotes 13 and 18), the relation between the algorithmic and implementational levels is also not completely “top-down”. Hypotheses at the algorithmic level are often chosen because they are easy to map onto neural structures and processes. In the case of Monte Carlo sampling, the algorithm definitely preceded the possible implementation (Fiser et al. 2010). However, in the case of log-probabilities, the ease with which Bayesian updating could be implemented in neurons is very likely to have influenced the choice of representation at the algorithmic level (Ma et al. 2006).
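
     The appeal of log-probabilities is easy to state. Taking logarithms of Bayes’ theorem turns multiplicative updating into addition,

        \log p(h \mid d) = \log p(d \mid h) + \log p(h) - \log p(d)

     so that, on such a coding scheme, combining a prior with new evidence amounts to summation, an operation that populations of neurons could plausibly perform on firing rates (Ma et al. 2006).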

  20. For example, the assumption that many biological systems are modular (and thus, that scientific discovery problems in biology should be solved by focusing on modular solutions) might be justified by appealing to the principle from evolutionary theory that modular systems are more robust, and thus more likely to survive and reproduce, as compared to non-modular systems (Simon 1996).

  21. This point is widely recognized in the literature. For example, Tenenbaum et al. (2011, p. 1279) argue that “What has come to be known as the ‘Bayesian’ or ‘probabilistic’ approach to reverse-engineering the mind has been heavily influenced by the engineering successes of Bayesian artificial intelligence and machine learning over the past two decades”. In recognition of this influence, many introductions to Bayesian rational analysis and Bayesian reverse-engineering focus explicitly on the presentation of specific mathematical methods, computational tools, and on the interdisciplinary utility of Bayesian statistical inference (see e.g. Griffiths et al. 2008).

  22. As has already been suggested, the novelty of Bayesian reverse-engineering lies not in the basic methodological principles being invoked, but only in the novel use of specific mathematical concepts and methods. Specifically, Bayesian reverse-engineering is uniquely “Bayesian” in exactly two ways. First, its starting point is the method of Bayesian rational analysis, which aims to describe different kinds of behavior and cognition as solutions to problems of probabilistic inference. Second, the solutions Bayesian reverse-engineers are most likely to discover are those that reflect the solutions being found in the interdisciplinary research community of Bayesian artificial intelligence, machine learning, and statistics.

  23. For example, whereas category learning might best be described using particle filters at the algorithmic level (Sanborn et al. 2010), perceptual decision-making might be better described with a decision criterion (Stüttgen et al. 2013), categorical perception with an exemplar model (Shi et al. 2010), and theory learning as stochastic search (Ullman et al. 2012). Even though the method of Bayesian rational analysis was used at the computational level in each of these domains (and in this sense contributes to mathematical unification: Colombo and Hartmann 2015), the resulting algorithmic-level models developed by Bayesian reverse-engineers are strikingly different, and possibly even incompatible.

  24. For example, if a subject engaged in the bar-categorization task exhibits a sloping psychometric function instead of the optimal threshold function, one researcher might prefer an added-limitations tweak, whereas another applies a suboptimality tweak. Both tweaks allow researchers to capture the same psychometric function by positing very different ideal observers.
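
     To see how such a sloping curve arises, consider a minimal sketch of the added-limitations variant (hypothetical parameter values, continuous with the bar-categorization sketch in note 4): an observer that applies the optimal criterion c to an internally noisy copy of the stimulus yields a smooth psychometric function exactly where the noiseless ideal observer’s function is a step.

        from scipy.stats import norm

        def p_respond_B(x, c=5.0, sigma_internal=0.8):
            """Added-limitations tweak: the observer responds 'B' whenever the
            noisy internal representation x + eps exceeds the criterion c,
            with eps ~ Normal(0, sigma_internal). Hence
            P('B' | x) = Phi((x - c) / sigma_internal), a sloping curve."""
            return norm.cdf((x - c) / sigma_internal)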

References

  • Acerbi, L., Vijayakumar, S., & Wolpert, D. M. (2014). On the origins of suboptimality in human probabilistic inference. PLoS Computational Biology, 10(6), 1–23.

  • Anderson, B. L. (2015). Can computational goals inform theories of vision? Topics in Cognitive Science, 7, 274–286.

  • Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85, 249–277.

  • Anderson, J. R. (1991a). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471–517.

  • Anderson, J. R. (1991b). The adaptive nature of human categorization. Psychological Review, 98(3), 409–429.

  • Andrieu, C., De Freitas, N., Doucet, A., & Jordan, M. I. (2003). An introduction to MCMC for machine learning. Machine Learning, 50, 5–43.

  • Ashby, F. G., & Alfonso-Reese, L. A. (1995). Categorization as probability density estimation. Journal of Mathematical Psychology, 39, 216–233.

  • Ashby, F. G., & Gott, R. E. (1988). Decision rules in the perception and categorization of multidimensional stimuli. Journal of Experimental Psychology: Learning, Memory and Cognition, 14(1), 33–53.

  • Battaglia, P. W., Hamrick, J., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences USA, 110(45), 18327–18332.

  • Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. PhD thesis, The Gatsby Computational Neuroscience Unit, University College London.

  • Berkes, P., Orbán, G., Lengyel, M., & Fiser, J. (2011). Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331, 83–87.

  • Berniker, M., & Körding, K. P. (2011). Bayesian approaches to sensory integration for motor control. WIREs Cognitive Science, 2, 419–428.

  • Bowers, J. S., & Davis, C. J. (2012a). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389–414.

  • Bowers, J. S., & Davis, C. J. (2012b). Is that what Bayesians believe? Reply to Griffiths, Chater, Norris, and Pouget (2012). Psychological Bulletin, 138(3), 423–426.

  • Brunswik, E. (1943). Organismic achievement and environmental probability. Psychological Review, 50, 255–272.

  • Chater, N., Goodman, N., Griffiths, T. L., Kemp, C., Oaksford, M., & Tenenbaum, J. B. (2011). The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science. Behavioral and Brain Sciences, 34(4), 194–196.

  • Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10, 287–291.

  • Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

  • Colombo, M., & Hartmann, S. (2015). Bayesian cognitive science, unification, and explanation. The British Journal for the Philosophy of Science [Epub ahead of print]. doi:10.1093/bjps/axv036.

  • Colombo, M., & Seriès, P. (2012). Bayes in the brain—On Bayesian modelling in neuroscience. The British Journal for the Philosophy of Science, 63(3), 697–723.

  • Cooper, R. P., & Peebles, D. (2015). Beyond single-level accounts: The role of cognitive architectures in cognitive scientific explanations. Topics in Cognitive Science, 7, 243–258.

  • Cummins, R. (1983). The nature of psychological explanation. Cambridge: MIT Press.

  • Danks, D. (2008). Rational analyses, instrumentalism, and implementations. In N. Chater & M. Oaksford (Eds.), The probabilistic mind: Prospects for rational models of cognition (pp. 59–75). Oxford: Oxford University Press.

  • Danks, D., & Eberhardt, F. (2009). Explaining norms and norms explained. Behavioral and Brain Sciences, 32(1), 86–87.

  • Davies, N. B., Krebs, J. R., & West, S. A. (2012). An introduction to behavioural ecology. Chichester, UK: Wiley-Blackwell.

  • Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.

  • Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press.

  • Dennett, D. (1994). Cognitive science as reverse engineering: Several meanings of ‘top-down’ and ‘bottom-up’. In D. Prawitz, B. Skyrms, & D. Westerstahl (Eds.), Logic, methodology & philosophy of science IX (pp. 679–689). Amsterdam: Elsevier Science.

  • Dorfman, D. D., & Biderman, M. (1971). A learning model for a continuum of sensory states. Journal of Mathematical Psychology, 8, 264–284.

  • Efron, B. (2013). A 250-year argument: Belief, behavior, and the bootstrap. Bulletin of the American Mathematical Society, 50(1), 129–146.

  • Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics, 75(4), 643–669.

  • Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.

  • Fiser, J., Berkes, P., Orbán, G., & Lengyel, M. (2010). Statistically optimal perception and learning: From behavior to neural representations. Trends in Cognitive Sciences, 14(3), 119–130.

  • Frank, M. C. (2013). Throwing out the Bayesian baby with the optimal bathwater: Response to Endress (2013). Cognition, 128, 417–423.

  • Fried, L. S., & Holyoak, K. J. (1984). Induction of category distributions: A framework for classification learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 234–257.

  • Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11), 1–24.

  • Geisler, W. S. (1989). Sequential ideal-observer analysis of visual discrimination. Psychological Review, 96(2), 267–314.

  • Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349, 273–278.

  • Gigerenzer, G. (1991). From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review, 98(2), 254–267.

  • Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.

  • Goodman, N., Frank, M. C., Griffiths, T. L., Tenenbaum, J. B., Battaglia, P., & Hamrick, J. (2015). Relevant and robust. A response to Marcus and Davis (2013). Psychological Science, 26(4), 539–541.

  • Green, D. M., & Swets, J. A. (1988). Signal detection and psychophysics (reprint ed.). Los Altos, CA: Peninsula Publishing.

  • Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357–364.

  • Griffiths, T. L., Chater, N., Norris, D., & Pouget, A. (2012a). How the Bayesians got their beliefs (and what those beliefs actually are): Comment on Bowers and Davis (2012). Psychological Bulletin, 138(3), 415–422.

  • Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. (2008). Bayesian models of cognition. In R. Sun (Ed.), The Cambridge handbook of computational cognitive modeling. Cambridge, UK: Cambridge University Press.

  • Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7, 217–229.

  • Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767–773.

  • Griffiths, T. L., Vul, E., & Sanborn, A. N. (2012b). Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21(4), 263–268.

  • Hacking, I. (1975). The emergence of probability. Cambridge, UK: Cambridge University Press.

  • Hahn, U. (2014). The Bayesian boom: Good thing or bad? Frontiers in Psychology, 5, 1–12.

  • Houlsby, N. M. T., Huszár, F., Ghassemi, M. M., Orbán, G., Wolpert, D. M., & Lengyel, M. (2013). Cognitive tomography reveals complex, task-independent mental representations. Current Biology, 23, 2169–2175.

  • Jäkel, F., Wichmann, F. A., & Schölkopf, B. (2009). Does cognitive science need kernels? Trends in Cognitive Sciences, 13, 381–388.

  • Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169–231.

  • Kac, M. (1969). Some mathematical models in science. Science, 166(3906), 695–699.

  • Kass, R. E. (2011). Statistical inference: The big picture. Statistical Science, 26(1), 1–9.

  • Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304.

  • Kersten, D., & Schrater, P. R. (2002). Pattern inference theory: A probabilistic approach to vision. In R. Mausfeld & D. Heyer (Eds.), Perception and the physical world. Chichester: Wiley.

  • Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.

  • Körding, K. P., & Wolpert, D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(6971), 244–247.

  • Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22–44.

  • Kruschke, J. K. (2006). Locally Bayesian learning with applications to retrospective revaluation and highlighting. Psychological Review, 113, 677–699.

  • Kwisthout, J., & van Rooij, I. (2013). Bridging the gap between theory and practice of approximate Bayesian inference. Cognitive Systems Research, 24, 2–8.

  • Love, B. C. (2015). The algorithmic level is the bridge between computation and brain. Topics in Cognitive Science, 7, 240–242.

  • Ma, W. J., Beck, J. M., Latham, P. E., & Pouget, A. (2006). Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11), 1432–1438.

  • Maloney, L. T., & Mamassian, P. (2009). Bayesian decision theory as a model of human visual perception: Testing Bayesian transfer. Visual Neuroscience, 26, 147–155.

  • Marcus, G. F., & Davis, E. (2013). How robust are probabilistic models of higher-level cognition? Psychological Science, 24(12), 2351–2360.

  • Marcus, G. F., & Davis, E. (2015). Still searching for principles: A response to Goodman et al. (2015). Psychological Science, 26(4), 542–544.

  • Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman.

  • McClamrock, R. (1991). Marr’s three levels: A re-evaluation. Minds and Machines, 1, 185–196.

  • McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., & Seidenberg, M. S. (2010). Letting structure emerge: Connectionist and dynamical systems approaches to cognition. Trends in Cognitive Sciences, 14(8), 348–356.

  • Milkowski, M. (2013a). Explaining the computational mind. Cambridge, MA: MIT Press.

  • Milkowski, M. (2013b). Reverse engineering in cognitive science. In M. Milkowski & K. Talmont-Kaminski (Eds.), Regarding the mind, naturally: Naturalist approaches to the sciences of the mental (pp. 12–29). Newcastle upon Tyne: Cambridge Scholars Publishing.

  • Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39–57.

  • Oaksford, M., & Chater, N. (2001). The probabilistic approach to human reasoning. Trends in Cognitive Sciences, 5(8), 349–357.

  • Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford: Oxford University Press.

  • Ostwald, D., Spitzer, B., Guggenmos, M., Schmidt, T. T., Kiebel, S. J., & Blankenburg, F. (2012). Evidence for neural encoding of Bayesian surprise in human somatosensation. NeuroImage, 62(1), 177–188.

  • Parker, A. J., & Newsome, W. T. (1998). Sense and the single neuron: Probing the physiology of perception. Annual Review of Neuroscience, 21, 227–277.

  • Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive statistician. Psychological Bulletin, 68(1), 29–46.

  • Pouget, A., Beck, J. M., Ma, W. J., & Latham, P. E. (2013). Probabilistic brains: Knowns and unknowns. Nature Neuroscience, 16(9), 1170–1178.

  • Rosas, P., Wagemans, J., Ernst, M. O., & Wichmann, F. A. (2005). Texture and haptic cues in slant discrimination: Reliability-based cue weighting without statistically optimal cue combination. Journal of the Optical Society of America A, 22(5), 801–809.

  • Rosas, P., Wichmann, F. A., & Wagemans, J. (2007). Texture and object motion in slant discrimination: Failure of reliability-based weighting of cues may be evidence for strong fusion. Journal of Vision, 7(6), 1–12.

  • Rothkopf, C. A., & Ballard, D. H. (2013). Modular inverse reinforcement learning for visuomotor behavior. Biological Cybernetics, 107(4), 477–490.

  • Salmon, W. (1989). Four decades of scientific explanation. Pittsburgh: Pittsburgh University Press.

  • Sanborn, A., Griffiths, T. L., & Navarro, D. J. (2010). Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117(4), 1144–1167.

  • Sanborn, A., & Silva, R. (2013). Constraining bridges between levels of analysis: A computational justification for locally Bayesian learning. Journal of Mathematical Psychology, 57, 94–106.

  • Sanborn, A. N., Griffiths, T. L., & Shiffrin, R. M. (2010). Uncovering mental representations with Markov chain Monte Carlo. Cognitive Psychology, 60, 63–106.

  • Savage, L. J. (1972). The foundations of statistics. Mineola, NY: Dover (original work published 1954).

  • Shagrir, O. (2010). Marr on computational-level theories. Philosophy of Science, 77(4), 477–500.

  • Shi, L., Griffiths, T. L., Feldman, N. H., & Sanborn, A. N. (2010). Exemplar models as mechanisms for performing Bayesian inference. Psychonomic Bulletin & Review, 17(4), 443–464.

  • Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.

  • Simon, H. A., Langley, P. W., & Bradshaw, G. L. (1981). Scientific discovery as problem solving. Synthese, 47(1), 1–27.

  • Simoncelli, E. P. (2003). Vision and the statistics of the visual environment. Current Opinion in Neurobiology, 13, 144–149.

  • Stocker, A. A., & Simoncelli, E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4), 578–585.

  • Stüttgen, M. C., Kasties, N., Lengersdorf, D., Starosta, S., Güntürkün, O., & Jäkel, F. (2013). Suboptimal criterion setting in a perceptual choice task with asymmetric reinforcement. Behavioural Processes, 96, 59–70.

  • Stüttgen, M. C., Schwarz, C., & Jäkel, F. (2011). Mapping spikes to sensations. Frontiers in Neuroscience, 5(125), 1–17.

  • Swets, J., Tanner, W. P., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301–340.

  • Swets, J. A. (2010). Tulips to thresholds. Los Altos Hills, CA: Peninsula Publishing.

  • Tanner, W. P. (1961). Physiological implications of psychophysical data. Annals of the New York Academy of Sciences, 89, 752–765.

  • Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.

  • Thomas, E. A. C. (1973). On a class of additive learning models: Error-correcting and probability matching. Journal of Mathematical Psychology, 10, 241–264.

  • Thomson, R., & Lebiere, C. (2013). Constraining Bayesian inference with cognitive architectures: An updated associative learning mechanism in ACT-R. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (pp. 318–362). Austin, TX: Cognitive Science Society.

  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

  • Ullman, T. D., Goodman, N. D., & Tenenbaum, J. B. (2012). Theory learning as stochastic search in the language of thought. Cognitive Development, 27, 455–480.

  • Vilares, I., Howard, J. D., Fernandes, H. L., Gottfried, J. A., & Körding, K. P. (2012). Differential representations of prior and likelihood uncertainty in the human brain. Current Biology, 22, 1–8.

  • Vilares, I., & Körding, K. P. (2011). Bayesian models: The structure of the world, uncertainty, behavior, and the brain. Annals of the New York Academy of Sciences, 1224, 22–39.

  • Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? Optimal decisions from very few samples. Cognitive Science, 38, 599–637.

  • Wimsatt, W. C. (1985). Heuristics and the study of human behavior. In D. W. Fiske & R. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 293–314). Chicago: University of Chicago Press.

  • Yang, Z., & Purves, D. (2003). A statistical explanation of visual space. Nature Neuroscience, 6(6), 632–640.

  • Zednik, C. (2011). The nature of dynamical explanation. Philosophy of Science, 78(2), 238–263.

  • Zednik, C. (2016). Cognitive mechanisms. In S. Glennan & P. Illari (Eds.), The Routledge handbook of mechanisms and mechanical philosophy. London: Routledge.

  • Zednik, C., & Jäkel, F. (2014). How does Bayesian reverse-engineering work? In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th annual conference of the Cognitive Science Society (pp. 666–671). Austin, TX: Cognitive Science Society.


Acknowledgments

The authors would like to thank Cameron Buckner, Tomer Ullman, and Felix Wichmann for comments on an earlier draft of this paper. A preliminary version of this work was presented at the Annual Conference of the Cognitive Science Society (Zednik and Jäkel 2014), as well as at workshops and colloquia in Berlin, Cortina d’Ampezzo, Leiden, Osnabrück, Rauischholzhausen, and Tilburg.

Author information

Correspondence to Carlos Zednik.

Cite this article

Zednik, C., Jäkel, F. Bayesian reverse-engineering considered as a research strategy for cognitive science. Synthese 193, 3951–3985 (2016). https://doi.org/10.1007/s11229-016-1180-3
