When Thinking Never Comes to a Halt: Using Formal Methods in Making Sure Your AI Gets the Job Done Good Enough

Part of the book series: Synthese Library (SYLI, volume 376)

Abstract

The recognition that human minds/brains are finite systems with limited resources for computation has led researchers in cognitive science to advance the Tractable Cognition thesis: human cognitive capacities are constrained by computational tractability. Since human-level AI, in its attempt to recreate intelligence and capacities inspired by the human mind, likewise deals with finite systems, transferring this thesis and adapting it accordingly may give rise to insights that can help in progressing towards the classical goal of AI: creating machines equipped with capacities rivaling human intelligence. We therefore develop the “Tractable Artificial and General Intelligence Thesis” and corresponding formal models usable for guiding the development of cognitive systems and models, applying notions from parameterized complexity theory and from the hardness of approximation to a general AI framework. In this chapter we provide an overview of our work, putting special emphasis on connections and correspondences to the heuristics framework, a recent development within cognitive science and cognitive psychology.


Notes

  1.

    For an introduction to parameterized complexity theory see, e.g., Flum and Grohe (2006) and Downey and Fellows (1999).

  2.

    The corresponding proofs of the respective results can be found in Robere and Besold (2012). Moreover, in the theorem statements, W[1] refers to the class of problems solvable by constant-depth combinatorial circuits with at most one gate of unbounded fan-in on any path from an input gate to an output gate. In parameterized complexity, the assumption W[1] ≠ FPT can be seen as analogous to P ≠ NP.
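
    To make the FPT side of this assumption concrete, the following is a minimal sketch (in Python; our illustration, not from the chapter) of the classic bounded-search-tree algorithm for k-Vertex-Cover, a standard fixed-parameter tractable problem: the running time is O(2^k · |E|), exponential only in the parameter k and polynomial in the input size.

        # Bounded-search-tree decision procedure for k-Vertex-Cover,
        # a textbook FPT algorithm with runtime O(2^k * |E|).
        def has_vertex_cover(edges, k):
            """True iff the graph given by `edges` has a vertex cover of size <= k."""
            if not edges:
                return True   # no edges left to cover
            if k == 0:
                return False  # edges remain, but the budget is exhausted
            u, v = edges[0]
            # Any cover must contain u or v; branch on both choices.
            return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
                    has_vertex_cover([e for e in edges if v not in e], k - 1))

        # A triangle has a vertex cover of size 2, but none of size 1.
        triangle = [(1, 2), (2, 3), (1, 3)]
        assert has_vertex_cover(triangle, 2) and not has_vertex_cover(triangle, 1)

    By contrast, the analogously parameterized Clique problem is W[1]-complete, so under the assumption W[1] ≠ FPT no algorithm of this kind exists for it.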

  3.

    On the other hand, considering notions more restrictive than APX, such as PTAS (the class of problems admitting a polynomial-time approximation scheme, i.e., an algorithm which takes an instance of an optimization problem together with a parameter ε > 0 and, in polynomial time, solves the problem within a factor of 1 + ε of the optimal solution), does not seem meaningful to us either: human satisficing, too, does not approximate optimal solutions up to an arbitrary degree but in experiments normally yields rather clearly defined cut-off points at a certain approximation level.
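
    The fixed cut-off behavior characteristic of APX can be illustrated (again our sketch, not from the chapter) by the textbook maximal-matching algorithm for Vertex Cover: it guarantees a cover at most twice the optimum, i.e., a single fixed approximation factor rather than the arbitrarily tight 1 + ε guarantee a PTAS would provide.

        # Greedy maximal-matching 2-approximation for Vertex Cover,
        # a standard witness of membership in APX.
        def approx_vertex_cover(edges):
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    # The edge is still uncovered: take both endpoints. Any
                    # optimal cover contains at least one of them, which
                    # yields the factor-2 guarantee.
                    cover.update((u, v))
            return cover

        # On a triangle the algorithm happens to return an optimal cover.
        print(approx_vertex_cover([(1, 2), (2, 3), (1, 3)]))  # e.g., {1, 2}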

  4.

    For reasons unclear to the authors, this perspective seems to be more widespread and far more deeply rooted in AI and cognitive systems research than in (theoretical) cognitive science and cognitive modeling, where complexity analysis, and formal computational analysis in general, have by now gained a solid foothold.

  5.

    Here we presuppose that cognitive capacities can be seen as information-processing systems. Still, this seems to be a fairly unproblematic claim, as it simply aligns cognitive processes with computations that process incoming information (e.g., from sensory input) and, depending on the input, result in a certain output (e.g., a certain behavioral or mental reaction).

  6.

    Fortunately, this way of conceptualizing a cognitive capacity naturally links to research in artificial cognitive systems. When trying to build a system modeling one or several selected cognitive capacities, we consider a general set of inputs (namely all scenarios in which a manifestation of the cognitive capacity can occur), which we necessarily characterize formally, although maybe only implicitly, in order to make the input parsable for the system. We then hypothesize a function mapping inputs onto outputs (namely the computations we have the system apply to the inputs), and finally obtain a well-characterized set of outputs (namely all the outputs our system can produce given its programming and the set of inputs).
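
    As a toy illustration of this input/function/output view (ours; the names Scenario, Behavior, and capacity are hypothetical and not part of any formal framework discussed here), a modeled capacity amounts to nothing more than a typed mapping:

        from typing import Callable

        Scenario = str  # a (here trivially) formalized input scenario
        Behavior = str  # one of the outputs the system can produce

        def capacity(scenario: Scenario) -> Behavior:
            # The hypothesized input-output mapping: the computations the
            # system applies to a parsed input scenario.
            return f"response to {scenario!r}"

        process: Callable[[Scenario], Behavior] = capacity
        print(process("sensory input"))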

  7.

    In discussions with researchers working in AI and cognitive systems, we have very occasionally received critical feedback on the choice of FPT, APX, and FPA as reference classes, which (curiously enough) were perceived as not restrictive enough. In sharp contrast with the previously discussed criticism, it was argued that human-level cognitive processing should be of linear complexity or less. Still, we do not see a problem here: we are neither fundamentalist about this precise choice of upper boundaries, nor do we claim that these are the only meaningfully applicable ones. Nonetheless, we opted for them because they can be justified quite straightforwardly and are backed up by close correspondences with other relevant notions from theoretical and practical studies in cognitive science and AI.

  8.

    Of course, this explicitly includes the case in which the considered classes are not conceptually restricted to the rather coarse-grained hierarchy used in “traditional” complexity theory, but in which the significantly finer and more subtle possibilities of class definition and differentiation introduced by parameterized complexity theory and other recent developments are also taken into account.

References

  • Besold, T. R. (2013). Formal limits to heuristics in cognitive systems. In Proceedings of the Second Annual Conference on Advances in Cognitive Systems (ACS) 2013, Baltimore.

  • Besold, T. R., & Robere, R. (2013a). A note on tractability and artificial intelligence. In K. U. Kühnberger, S. Rudolph, & P. Wang (Eds.), Artificial General Intelligence – 6th International Conference, AGI 2013, Proceedings, Beijing (Lecture Notes in Computer Science, Vol. 7999, pp. 170–173). Springer.

  • Besold, T. R., & Robere, R. (2013b). When almost is not even close: remarks on the approximability of HDTP. In K. U. Kühnberger, S. Rudolph, & P. Wang (Eds.), Artificial General Intelligence – 6th International Conference, AGI 2013, Proceedings, Beijing (Lecture Notes in Computer Science, Vol. 7999, pp. 11–20). Springer.

  • Blokpoel, M., Kwisthout, J., Wareham, T., Haselager, P., Toni, I., & van Rooij, I. (2011). The computational costs of recipient design and intention recognition in communication. In Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, Boston (pp. 465–470).

  • Cai, L., & Chen, J. (1997). On fixed-parameter tractability and approximability of NP optimization problems. Journal of Computer and System Sciences, 54(3), 465–474. doi:10.1006/jcss.1997.1490.

  • Cai, L., & Huang, X. (2006). Fixed-parameter approximation: Conceptual framework and approximability results. In H. Bodlaender & M. Langston (Eds.), Parameterized and exact computation (Lecture Notes in Computer Science, Vol. 4169, pp. 96–108). Berlin/Heidelberg: Springer. doi:10.1007/11847250_9.

  • Chapman, D. (1987). Planning for conjunctive goals. Artificial Intelligence, 32(3), 333–377.

  • Cooper, G. (1990). The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42, 393–405.

  • Cummins, R. (2000). “How does it work?” vs. “What are the laws?” Two conceptions of psychological explanation. In F. Keil & R. Wilson (Eds.), Explanation and cognition (pp. 117–145). Cambridge: MIT.

  • Czerlinski, J., Goldstein, D., & Gigerenzer, G. (1999). How good are simple heuristics? In G. Gigerenzer, P. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart. New York: Oxford University Press.

  • Downey, R. G., & Fellows, M. R. (1999). Parameterized complexity. New York: Springer.

  • Downey, R. G., Fellows, M. R., & Stege, U. (1997). Parameterized complexity: A framework for systematically confronting computational intractability. In Contemporary Trends in Discrete Mathematics: From DIMACS and DIMATIA to the Future. Providence: AMS.

  • Evans, T. G. (1964). A heuristic program to solve geometric-analogy problems. In Proceedings of the April 21–23, 1964, Spring Joint Computer Conference AFIPS ’64 (Spring) (pp. 327–338). New York: ACM. doi:10.1145/1464122.1464156.

  • Falkenhainer, B., Forbus, K., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63. doi:10.1016/0004-3702(89)90077-5.

  • Flum, J., & Grohe, M. (2006). Parameterized complexity theory. Berlin: Springer.

  • Frixione, M. (2001). Tractable competence. Minds and Machines, 11, 379–397.

  • Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155–170.

  • Gentner, D., & Forbus, K. (1991). MAC/FAC: A model of similarity-based retrieval. Cognitive Science, 19, 141–205.

  • Gigerenzer, G., Hertwig, R., & Pachur, T. (Eds.). (2011). Heuristics: The foundation of adaptive behavior. New York: Oxford University Press.

  • Gottlob, G., & Szeider, S. (2008). Fixed-parameter algorithms for artificial intelligence, constraint satisfaction and database problems. The Computer Journal, 51(3), 303–325. doi:10.1093/comjnl/bxm056.

  • Hofstadter, D. (2001). Epilogue: Analogy as the core of cognition. In D. Gentner, K. Holyoak, & B. Kokinov (Eds.), The analogical mind: Perspectives from cognitive science (pp. 499–538). Cambridge: MIT.

  • Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge/New York: Cambridge University Press.

  • Krumnack, U., Schwering, A., Gust, H., & Kühnberger, K. (2007). Restricted higher-order anti-unification for analogy making. In Twentieth Australian Joint Conference on Artificial Intelligence. Berlin: Springer.

  • Kwisthout, J., & van Rooij, I. (2012). Bridging the gap between theory and practice of approximate Bayesian inference. In Proceedings of the 11th International Conference on Cognitive Modeling, Berlin (pp. 199–204).

  • Levesque, H. (1988). Logic and the complexity of reasoning. Journal of Philosophical Logic, 17, 355–389.

  • Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: Freeman.

  • Nebel, B. (1996). Artificial intelligence: A computational perspective. In G. Brewka (Ed.), Principles of knowledge representation (pp. 237–266). Stanford: CSLI Publications.

  • Pylyshyn, Z. (1980). Computation and cognition: Issues in the foundations of cognitive science. The Behavioral and Brain Sciences, 3, 111–132.

  • Reitman, W. R., Grove, R. B., & Shoup, R. G. (1964). Argus: An information-processing model of thinking. Behavioral Science, 9(3), 270–281. doi:10.1002/bs.3830090312.

  • Robere, R., & Besold, T. R. (2012). Complex analogies: Remarks on the complexity of HDTP. In Twenty-Fifth Australasian Joint Conference on Artificial Intelligence (Lecture Notes in Computer Science, Vol. 7691, pp. 530–542). Berlin/New York: Springer.

  • Schwering, A., Krumnack, U., Kühnberger, K. U., & Gust, H. (2009a). Syntactic principles of heuristic-driven theory projection. Journal of Cognitive Systems Research, 10(3), 251–269.

  • Schwering, A., Kühnberger, K. U., & Kokinov, B. (2009b). Analogies: Integrating multiple cognitive abilities – guest editorial. Journal of Cognitive Systems Research, 10(3), 175–177.

  • Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–138. doi:10.1037/h0042769.

  • Simon, H. A. (1957). Models of man: Social and rational. New York: Wiley.

  • Turing, A. (1969). Intelligent machinery. In B. Meltzer & D. Michie (Eds.), Machine intelligence (Vol. 5, pp. 3–23). Edinburgh: Edinburgh University Press.

  • van Rooij, I. (2008). The tractable cognition thesis. Cognitive Science, 32, 939–984.

  • Wareham, T., Kwisthout, J., Haselager, P., & van Rooij, I. (2011). Ignorance is bliss: A complexity perspective on adapting reactive architectures. In Proceedings of the First IEEE Conference on Development and Learning and on Epigenetic Robotics, Frankfurt am Main (pp. 465–470).


Author information

Correspondence to Tarek R. Besold.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Besold, T.R., Robere, R. (2016). When Thinking Never Comes to a Halt: Using Formal Methods in Making Sure Your AI Gets the Job Done Good Enough. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_4
