Rational analysis, intractability, and the prospects of ‘as if’-explanations


The plausibility of so-called ‘rational explanations’ in cognitive science is often contested on the grounds of computational intractability. Some have argued that intractability is a pseudoproblem, however, because cognizers do not actually perform the rational calculations posited by rational models; rather, they only behave as if they do. Whether or not the problem of intractability is dissolved by this gambit critically depends, inter alia, on the semantics of the ‘as if’ connective. First, this paper examines the five most sensible explications in the literature, and concludes that none of them actually circumvents the problem. Hence, rational ‘as if’ explanations must obey the minimal computational constraint of tractability. Second, this paper describes how rational explanations could satisfy the tractability constraint. Our approach suggests a computationally unproblematic interpretation of ‘as if’ that is compatible with the original conception of rational analysis.


  1.

    Note that Marr saw ‘what’-questions as intimately tied to ‘why’-questions: e.g., why is \(f\) the appropriate function for \(\phi \) to realize? An answer to such questions would presuppose a specification of the conditions for appropriateness. We’ll later return to this point when addressing rational explanation.

  2.

    Such cases of underdetermination of theory by evidence themselves do not determine any stronger antirealist conclusion. An alternative lesson to be drawn is just that cognitive scientists taking top-down explanatory approaches may benefit from theoretical constraints on computational-level theories (van Rooij 2008).

  3.

    A common presumption is that such \(f\)s not only characterize the ‘what’ of a capacity \(\phi \), i.e., \(f\) itself, but also provide the grounds for understanding why \(f\) is what characterizes \(\phi \). Of course, to presume that \(f\) is what \(\phi \) does because \(f\) is optimal for the goals \(G\) of \(\phi \) in environment \(E\) is not to claim that \(f\) itself explains why \(f\) is the function that characterizes \(\phi \). Rather, the ‘why’ explanation is a narrative that \(f\) is as it is because of the principle of rationality and the assumptions that \(\phi \) has goals \(G\) and the environment of adaptation is \(E\). Moreover, to genuinely be a ‘why’ explanation, rational analysts should not only show that \(f\) is optimal for \(G\) and \(E\), but also that \(f\) is as it is because it is optimal for \(G\) and \(E\) (Danks 2008). Because this difference between \(f\) as a computational-level description of \(\phi \) and the rational narrative surrounding and motivating \(f\) is seldom made explicit, cognitive scientists are sometimes led to the mistaken idea that \(f\)s derived via rational analysis are rational explanations in the sense of explaining the ‘why’ of \(f\). In our view, the distinction should be made explicit and the ‘why’ explanation should be differentiated from the computational-level \(f\) itself (cf. Danks 2013). After all, even if the rational narrative were falsified, \(f\) could still correctly characterize \(\phi \); the correctness of that characterization is independent of the truth-value of the rational story about how \(f\) came to be.

  4.

    NP-hard functions owe their name to being at least as hard as any function in the class NP, where ‘NP’ abbreviates ‘Nondeterministic Polynomial time’.

  5.

    This claim assumes P \(\ne \) NP, a widely-endorsed conjecture in theoretical computer science (Garey and Johnson 1979; Fortnow 2009).

  6.

    This point cannot be overemphasized: the classification of a function \(f\) as intractable does not come lightly. For were there to exist even a nonconstructive proof that some tractable algorithm for computing \(f\) exists, then, knowing nothing else about that algorithm, we would already be led to classify \(f\) as tractable.

  7.

    Note that, here, Chater and Oaksford do use the term ‘calculation’ to refer to the computational process involved in determining the output of some \(\phi \), in this case perceptual organization. Perhaps ‘calculation’ is intended in the broad sense of ‘computation’, rather than as a synonym for ‘rational calculation’.

  8.

    Otherwise, \(H\) would be an exact algorithm, in which case the earlier problems noted in §§2.1–2.3 would recur.

  9.

    For instance, it may be believed that \(f(i)\) and \(f_H(i)\) are the same for many inputs \(i \in I = I_H\), or it may be conjectured that the difference between \(f(i)\) and \(f_H(i)\) is small for many inputs \(i \in I\).
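    Both readings can be made concrete with a small sketch (our own, using a toy problem as a hypothetical stand-in for \(f\) and \(f_H\)): pair an exact, exponential-time function with a polynomial-time heuristic and tally, over many inputs, how often the two agree and how large the difference \(f(i) - f_H(i)\) gets.

```python
# Illustrative sketch only: a toy exact function f and a greedy heuristic
# f_H, standing in for the intractable f and tractable f_H discussed here.
from itertools import combinations
import random

def f(weights, capacity):
    """Exact: largest subset sum not exceeding capacity (exponential time)."""
    best = 0
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            s = sum(combo)
            if best < s <= capacity:
                best = s
    return best

def f_H(weights, capacity):
    """Heuristic: greedily take the largest items first (polynomial time)."""
    total = 0
    for w in sorted(weights, reverse=True):
        if total + w <= capacity:
            total += w
    return total

random.seed(0)
agree, gaps = 0, []
for _ in range(200):
    ws = [random.randint(1, 30) for _ in range(8)]
    cap = random.randint(20, 60)
    exact, approx = f(ws, cap), f_H(ws, cap)
    agree += exact == approx
    gaps.append(exact - approx)
print(f"agreement on {agree}/200 inputs; largest difference {max(gaps)}")
```

    Such simulations can suggest that \(f(i) = f_H(i)\) for many inputs, or that the difference is typically small, but of course they do not by themselves establish either claim for all inputs.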

  10.

    Otherwise, \(f\) would not be intractable (Schöning 1990; van Rooij et al. 2012).

  11.

    We thank an anonymous reviewer for raising this possibility: even if it is \(f_H\) rather than \(f\) that accurately characterizes the capacity of interest \(\phi \), in practice \(f_H\) may be unknown; so couldn’t rational analysts contend that the appeal to ‘as if’ merely serves as a sort of ‘promissory note’? That is, until we determine what \(f_H\) is, \(f\) can serve as an (instrumentalist) working hypothesis that allows research to productively continue. This may be so. We don’t contest that intractable \(f\)s can on occasion instrumentally lead to important results. For instance, postulating intractable \(f\)s raises scientifically fruitful questions about the conditions under which those functions may be tractable, and answering those questions may lead to (realist) hypotheses about \(f_H\). What we do contest is the claim that the intractability of \(f\) is rendered permanently irrelevant by an appeal to ‘as if’. Whatever instrumentalist commitments are invoked, it’s still the case that, in order for the computational-level theory to be computationally plausible and explanatory, at some point in time—sooner or later—\(f_H\) needs to be determined.

  12.

    This claim assumes NP \(\not \subseteq \) BPP, a widely-endorsed conjecture in theoretical computer science (see Johnson 1990, p. 120 and Zachos 1986, p. 396), where ‘BPP’ abbreviates Bounded-error Probabilistic Polynomial time.

  13.

    A function \(f\) is self-paddable if and only if a set of instances \(i_1, i_2,\ldots , i_m\) of \(f\) can be embedded in a single instance \(i_E\) of \(f\) such that \(f(i_1), f(i_2),\ldots ,f(i_m)\) can be derived from \(f(i_E)\). For more details, see Definition 6 in van Rooij and Wareham (2012).
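    To illustrate how such an embedding can work, here is a toy sketch (our own; not the construction in van Rooij and Wareham 2012) using a digit-block trick for the simple, tractable function \(f = \) sum: each instance is scaled into its own block of digits, so every individual value \(f(i_j)\) can be read off from the single value \(f(i_E)\).

```python
# Toy illustration of self-padding for the function f = sum.
# Assumption: `base` exceeds every individual result f(i_j).

def f(instance):
    return sum(instance)

def embed(instances, base):
    """Embed instances i_1,...,i_m into one instance i_E by scaling
    the numbers of instance j into their own digit block base**j."""
    padded = []
    for j, inst in enumerate(instances):
        padded.extend(x * base ** j for x in inst)
    return padded

def extract(value, j, base):
    """Recover f(i_j) from f(i_E): it is the j-th digit of f(i_E) in `base`."""
    return (value // base ** j) % base

instances = [[1, 2, 3], [4, 4], [7]]
base = 100  # larger than any f(i_j) here
i_E = embed(instances, base)
for j, inst in enumerate(instances):
    assert extract(f(i_E), j, base) == f(inst)
```

    For NP-hard optimization functions the padding construction is more delicate, but the idea is the same: a single computation on the padded instance answers all embedded instances at once.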

  14.

    The formal tools for putting this type of revisionary approach into practice have been extensively described by van Rooij (2008) (see also Blokpoel et al. 2013; van Rooij and Wareham 2008), and build on the mathematical theory of parameterized complexity (Downey and Fellows 1999). Using proof techniques from this theory, it can be shown that some intractable (NP-hard) functions \(f: I \rightarrow O\) can be computed in fixed-parameter (fp-) tractable time \(O(g(K)|i|^c)\), where \(g\) can be any computable function of the parameters in the set \(K = \{k_1, k_2,\ldots , k_m\}\), \(|i|\) denotes the input size, and \(c\) is a constant. In that event, the intractable \(f\) can be computed efficiently (in polynomial time), even for large inputs, under the assumption that \(f\) operates only on inputs in which the parameters in \(K\) are restricted to relatively small values (each \(k_j \ll |i|\)). If rational analysts were to have theoretical and/or empirical reasons for this assumption, then revising \(f\) to \(f'\), where \(f'\) is \(f\) restricted to inputs with small values for the parameters \(k_1, k_2,\ldots , k_m\), would yield a tractable function \(f'\) that is rational according to the rational analysis that originally yielded \(f\).
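    A standard textbook example of such an fp-tractable algorithm (our illustration, not specific to rational analysis) is the bounded search tree for Vertex Cover, an NP-hard problem decidable in time \(O(2^k \cdot |E|)\) when parameterized by the cover size \(k\); here \(g(k) = 2^k\), so the computation is fast whenever the parameter \(k\) is small relative to the input.

```python
# Bounded search tree for Vertex Cover, a classic fixed-parameter tractable
# algorithm: some endpoint of any uncovered edge must be in the cover, so we
# branch on the two endpoints, giving a search tree of size at most 2**k.

def vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover
    of size at most k."""
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # an edge remains but the budget is spent
    u, v = edges[0]
    # Any cover must contain u or v; try both, spending one unit of budget.
    return (vertex_cover([e for e in edges if u not in e], k - 1)
            or vertex_cover([e for e in edges if v not in e], k - 1))

# A triangle needs two vertices to cover all three of its edges:
triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover(triangle, 2), vertex_cover(triangle, 1))  # prints: True False
```

    The exponential cost is confined to the parameter \(k\) rather than the input size, which is exactly the shape of restriction the revisionary approach exploits.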


  1. Aaronson, S. (2005). NP-complete problems and physical reality. SIGACT News, 36, 30–52.

  2. Aaronson, S. (2008). The limits of quantum computers. Scientific American, 298, 62–69.

  3. Abdelbar, A. M., & Hedetniemi, S. M. (1998). Approximating MAPs for belief networks is NP-hard and other theorems. Artificial Intelligence, 102, 21–38.

  4. Anderson, J. R., & Matessa, M. (1990). A rational analysis of categorization. In B. Porter & R. Mooney (Eds.), Proceedings of the 7th international workshop on machine learning (pp. 76–84). San Francisco: Morgan Kaufmann.

  5. Anderson, J. R. (1990). The adaptive character of thought. Hillsdale: Lawrence Erlbaum Associates Inc.

  6. Anderson, J. R. (1991a). The adaptive nature of human categorization. Psychological Review, 98, 409–429.

  7. Anderson, J. R. (1991b). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471–517.

  8. Anderson, J. R. (1991c). The place of cognitive architectures in a rational analysis. In K. Van Lehn (Ed.), Architectures for intelligence (pp. 1–24). Hillsdale: Erlbaum.

  9. Arora, S. (1998). The approximability of NP-hard problems. Proceedings of the 30th annual symposium on the theory of computing (pp. 337–348). New York: ACM Press.

  10. Blokpoel, M., Kwisthout, J., van der Weide, T. P., Wareham, T., & van Rooij, I. (2013). A computational-level explanation of the speed of goal inference. Journal of Mathematical Psychology, 57, 117–133.

  11. Bournez, O., & Campagnolo, M. L. (2008). A survey of continuous-time computation. In S. B. Cooper, B. Löwe, & A. Sorbi (Eds.), New computational paradigms: Changing conceptions of what is computable (pp. 383–423). Berlin: Springer.

  12. Chater, N., & Oaksford, M. (1990). Autonomy, implementation, and cognitive architecture: A reply to Fodor and Pylyshyn. Cognition, 34, 93–107.

  13. Chater, N., & Oaksford, M. (1999). Ten years of the rational analysis of cognition. Trends in Cognitive Sciences, 3, 57–65.

  14. Chater, N., & Oaksford, M. (2000). The rational analysis of mind and behavior. Synthese, 122, 93–131.

  15. Chater, N., & Oaksford, M. (2001). Human rationality and the psychology of reasoning: Where do we go from here? British Journal of Psychology, 92, 193–216.

  16. Chater, N., & Oaksford, M. (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford: Oxford University Press.

  17. Chater, N., Oaksford, M., Nakisa, R., & Redington, M. (2003). Fast, frugal, and rational: How rational norms explain behavior. Organizational Behavior and Human Decision Processes, 90, 63–86.

  18. Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition. Trends in Cognitive Science, 10, 287–293.

  19. Cooper, G. F. (1990). The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42, 393–405.

  20. Cotogno, P. (2003). Hypercomputation and the physical Church-Turing Thesis. British Journal of Philosophy of Science, 54, 181–223.

  21. Dagum, P., & Luby, M. (1993). Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60, 141–153.

  22. Danks, D. (2008). Rational analyses, instrumentalism, and implementations. In N. Chater & M. Oaksford (Eds.), The probabilistic mind: Prospects for rational models of cognition (pp. 59–75). Oxford: Oxford University Press.

  23. Danks, D. (2013). Moving from levels and reduction to dimensions and constraints. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual conference of the cognitive science society (pp. 2124–2129). Oxford: Oxford University Press.

  24. Davis, M. (2004). The myth of hypercomputation. In C. Tuescher (Ed.), Alan Turing: Life and legacy of a great thinker (pp. 195–211). Berlin: Springer.

  25. Dennett, D. C. (1994). Cognitive science as reverse engineering: several meanings of ‘top down’ and ‘bottom up’. In D. Prawitz, B. Skyrms, & D. Westerstahl (Eds.), Logic, methodology, and philosophy of science IX (pp. 679–689). Amsterdam: Elsevier Science.

  26. Downey, R. G., & Fellows, M. R. (1999). Parameterized complexity. New York: Springer.

  27. Ellis, N. C. (2006). Language acquisition as rational contingency learning. Applied Linguistics, 27, 1–24.

  28. Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.

  29. Fortnow, L. (2009). The status of the P versus NP problem. Communications of the ACM, 52, 78–86.

  30. Garey, M., & Johnson, D. (1979). Computers and intractability: A guide to the theory of NP-completeness. San Francisco: W. H. Freeman & Co.

  31. Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In D. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 62–88). Malden: Blackwell.

  32. Goodman, N. D., Tenenbaum, J. B., Feldman, J., & Griffiths, T. L. (2008). A rational analysis of rule-based concept learning. Cognitive Science, 32, 108–154.

  33. Gray, W. D., Sims, C. R., Fu, W., & Schoelles, M. J. (2006). The soft constraints hypothesis: A rational analysis approach to resource allocation for interactive behavior. Psychological Review, 113, 461–482.

  34. Griffiths, T. L., Vul, E., & Sanborn, A. N. (2012). Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21, 263–268.

  35. Johnson, D. (1990). A catalog of complexity classes. In J. van Leeuwen (Ed.), Handbook of theoretical computer science; volume A: Algorithms and complexity (pp. 67–161). Cambridge: MIT Press.

  36. Kwisthout, J. (2011). Most probable explanations in Bayesian networks: Complexity and tractability. International Journal of Approximate Reasoning, 52, 1452–1469.

  37. Kwisthout, J., & van Rooij, I. (2013). Bridging the gap between theory and practice of approximate Bayesian inference. Cognitive Systems Research, 24, 2–8.

  38. Kwisthout, J., Wareham, T., & van Rooij, I. (2011). Bayesian intractability is not an ailment that approximation can cure. Cognitive Science, 35, 779–784.

  39. Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman & Co.

  40. Nayebi, A. (2014). Practical intractability: A critique of the hypercomputation movement. Minds and Machines, 24, 275–305.

  41. Ngo, J. T., Marks, J., & Karplus, M. (1994). Computational complexity, protein structure prediction, and the Levinthal paradox. In K. Merz Jr & S. Le Grand (Eds.), The protein folding problem and tertiary structure prediction (pp. 433–506). Boston: Birkhauser.

  42. Norris, D. (2006). The Bayesian reader: Explaining word recognition as an optimal Bayesian decision process. Psychological Review, 113, 327–357.

  43. Oaksford, M., & Chater, N. (1998). Rationality in an uncertain world: Essays on the cognitive science of human reasoning. Sussex: Psychology Press.

  44. Oaksford, M., & Chater, N. (2009). Précis of Bayesian rationality: The probabilistic approach to human reasoning. Behavioral and Brain Sciences, 32, 69–120.

  45. Piccinini, G. (2011). The physical Church-Turing thesis: Modest or bold? British Journal of Philosophy of Science, 62, 733–769.

  46. Sanborn, A. N., Griffiths, T. L., & Navarro, D. J. (2010). Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117, 1144–1167.

  47. Schöning, U. (1990). Complexity cores and hard problem instances. In T. Asano, T. Ibaraki, H. Imai, & T. Nishizeki (Eds.), Proceedings of the international symposium on algorithms (SIGAL’90) (pp. 232–240). Berlin: Springer.

  48. Shi, L., Griffiths, T. L., Feldman, N. H., & Sanborn, A. N. (2010). Exemplar models as a mechanism for performing Bayesian inference. Psychonomic Bulletin & Review, 17, 443–464.

  49. Shimony, S. E. (1994). Finding MAPs for belief networks is NP-hard. Artificial Intelligence, 68, 399–410.

  50. Šíma, J., & Orponen, P. (2003). General-purpose computation with neural networks: A survey of complexity-theoretic results. Neural Computation, 15, 2727–2778.

  51. Tsotsos, J. K. (1990). Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13, 423–469.

  52. van Rooij, I. (2008). The tractable cognition thesis. Cognitive Science, 32, 939–984.

  53. van Rooij, I., & Wareham, T. (2008). Parameterized complexity in cognitive modeling: Foundations, applications and opportunities. Computer Journal, 51, 385–404.

  54. van Rooij, I., & Wareham, T. (2012). Intractability and approximation of optimization theories of cognition. Journal of Mathematical Psychology, 56, 232–247.

  55. van Rooij, I., & Wright, C. D. (2006). The incoherence of heuristically explaining coherence. In R. Sun & N. Miyake (Eds.), Proceedings of 28th annual conference of the cognitive science society (p. 2622). Mahwah: Lawrence Erlbaum Associates.

  56. van Rooij, I., Wright, C. D., & Wareham, T. (2012). Intractability and the use of heuristics in psychological explanations. Synthese, 187, 471–487.

  57. Wimsatt, W. C. (2007). Re-engineering philosophy for limited beings: Piecewise approximations to reality. Cambridge: Harvard University Press.

  58. Zachos, S. (1986). Probabilistic quantifiers, adversaries, & complexity classes: An overview. In A. L. Selman (Ed.), Structure in complexity theory (pp. 383–400). Berlin: Springer.

Author information



Corresponding author

Correspondence to Iris van Rooij.

About this article

Cite this article

van Rooij, I., Wright, C.D., Kwisthout, J. et al. Rational analysis, intractability, and the prospects of ‘as if’-explanations. Synthese 195, 491–510 (2018). https://doi.org/10.1007/s11229-014-0532-0


  • Psychological explanation
  • Rational analysis
  • Computational-level theory
  • Intractability
  • NP-hard
  • Approximation