Rationality and Intelligence: A Brief Update

Fundamental Issues of Artificial Intelligence

Part of the book series: Synthese Library (SYLI, volume 376)

Abstract

The long-term goal of AI is the creation and understanding of intelligence. This requires a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper, which updates a much earlier version (Russell, Artif Intell 94:57–77, 1997), reviews the sequence of conceptual shifts leading to a different candidate, bounded optimality, that is closer to our informal conception of intelligence and reduces the gap between theory and practice. Some promising recent developments are also described.
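
For concreteness, bounded optimality can be stated as a constrained optimization over agent programs. The following is a rough sketch in the spirit of Russell and Subramanian (1995); the notation is an informal paraphrase rather than a quotation from the chapter:

    l_opt = argmax_{l ∈ L_M} V(Agent(l, M), E, U)

where L_M is the set of programs that can run on the agent's machine M, Agent(l, M) denotes the behavior produced by running program l on M, E is the class of environments, and U is the performance measure. On this view, the bounded optimal agent does as well as any program that could actually run on its hardware, rather than as well as an unimplementable ideal.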

Notes

  1.

    A similar observation was made by Horvitz and Breese (1990) for cases where the object level is so restricted that the metalevel decision problem can be solved in constant time.
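
    As an illustration of the kind of metalevel decision the note refers to, the fragment below sketches the standard "estimated net value of computation" rule: perform one more object-level computation only if its expected improvement in decision quality exceeds the cost of the time it consumes; otherwise act on the current best choice. This is a hypothetical Python sketch; the names and numbers are illustrative assumptions, not taken from the chapter.

    # Hypothetical sketch of a one-step metalevel decision: carry out a candidate
    # computation only if its estimated net value (expected gain in decision
    # quality minus time cost) is positive; otherwise act immediately.
    def net_value(candidate):
        return candidate["expected_gain"] - candidate["time_cost"]

    def metalevel_choice(candidates):
        best = max(candidates, key=net_value)
        return best["name"] if net_value(best) > 0 else "act now"

    # Example: two candidate node expansions in a game-tree search (numbers invented).
    candidates = [
        {"name": "expand node A", "expected_gain": 0.12, "time_cost": 0.05},
        {"name": "expand node B", "expected_gain": 0.02, "time_cost": 0.05},
    ]
    print(metalevel_choice(candidates))  # -> expand node A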

References

  • Agre, P. E., & Chapman, D. (1987). Pengi: An implementation of a theory of activity. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87), Milan (pp. 268–272). Morgan Kaufmann.

  • Andre, D., & Russell, S. J. (2002). State abstraction for programmable reinforcement learning agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-02), Edmonton (pp. 119–125). AAAI Press.

  • Bellman, R. E. (1957). Dynamic programming. Princeton: Princeton University Press.

  • Berry, D. A., & Fristedt, B. (1985). Bandit problems: Sequential allocation of experiments. London: Chapman and Hall.

  • Breese, J. S., & Fehling, M. R. (1990). Control of problem-solving: Principles and architecture. In R. D. Shachter, T. Levitt, L. Kanal, & J. Lemmer (Eds.), Uncertainty in artificial intelligence 4. Amsterdam/London/New York: Elsevier/North-Holland.

  • Brooks, R. A. (1989). Engineering approach to building complete, intelligent beings. Proceedings of the SPIE—The International Society for Optical Engineering, 1002, 618–625.

  • Carnap, R. (1950). Logical foundations of probability. Chicago: University of Chicago Press.

  • Cherniak, C. (1986). Minimal rationality. Cambridge: MIT.

  • Dean, T., & Boddy, M. (1988). An analysis of time-dependent planning. In Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), St. Paul (pp. 49–54). Morgan Kaufmann.

  • Dean, T., Aloimonos, J., & Allen, J. F. (1995). Artificial intelligence: Theory and practice. Redwood City: Benjamin/Cummings.

  • Dennett, D. C. (1988). The moral first aid manual. In S. McMurrin (Ed.), Tanner lectures on human values (Vol. 7, pp. 121–147). University of Utah Press and Cambridge University Press.

  • Doyle, J., & Patil, R. (1991). Two theses of knowledge representation: Language restrictions, taxonomic classification, and the utility of representation services. Artificial Intelligence, 48(3), 261–297.

  • Good, I. J. (1971). Twenty-seven principles of rationality. In V. P. Godambe & D. A. Sprott (Eds.), Foundations of statistical inference (pp. 108–141). Toronto: Holt, Rinehart, Winston.

  • Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2008). Church: A language for generative models. In Proceedings of UAI-08, Helsinki (pp. 220–229).

  • Harman, G. H. (1983). Change in view: Principles of reasoning. Cambridge: MIT.

  • Hay, N., Russell, S., Shimony, S. E., & Tolpin, D. (2012). Selecting computations: Theory and applications. In Proceedings of UAI-12, Catalina Island.

  • Horvitz, E. J. (1987). Problem-solving design: Reasoning about computational value, trade-offs, and resources. In Proceedings of the Second Annual NASA Research Forum, NASA Ames Research Center, Moffett Field, CA (pp. 26–43).

  • Horvitz, E. J. (1989). Reasoning about beliefs and actions under computational resource constraints. In L. N. Kanal, T. S. Levitt, & J. F. Lemmer (Eds.), Uncertainty in artificial intelligence 3 (pp. 301–324). Amsterdam/London/New York: Elsevier/North-Holland.

  • Horvitz, E. J., & Breese, J. S. (1990). Ideal partition of resources for metareasoning (Technical report KSL-90-26), Knowledge Systems Laboratory, Stanford University, Stanford.

  • Howard, R. A. (1966). Information value theory. IEEE Transactions on Systems Science and Cybernetics, SSC-2, 22–26.

  • Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin/New York: Springer.

  • Kearns, M., Schapire, R. E., & Sellie, L. (1992). Toward efficient agnostic learning. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory (COLT-92), Pittsburgh. ACM.

  • Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.

  • Kocsis, L., & Szepesvari, C. (2006). Bandit-based Monte-Carlo planning. In Proceedings of ECML-06, Berlin.

  • Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems in Information Transmission, 1(1), 1–7.

  • Koopmans, T. C. (1972). Representation of preference orderings over time. In C. B. McGuire & R. Radner (Eds.), Decision and organization. Amsterdam/London/New York: Elsevier/North-Holland.

  • Kumar, P. R., & Varaiya, P. (1986). Stochastic systems: Estimation, identification, and adaptive control. Upper Saddle River: Prentice-Hall.

  • Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.

  • Levesque, H. J. (1986). Making believers out of computers. Artificial Intelligence, 30(1), 81–108.

  • Livnat, A., & Pippenger, N. (2006). An optimal brain can be composed of conflicting agents. Proceedings of the National Academy of Sciences of the United States of America, 103(9), 3198–3202.

  • Marthi, B., Russell, S., Latham, D., & Guestrin, C. (2005). Concurrent hierarchical reinforcement learning. In Proceedings of IJCAI-05, Edinburgh.

  • Marthi, B., Russell, S. J., & Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In Proceedings of ICAPS-08, Sydney.

  • Matheson, J. E. (1968). The economic value of analysis and computation. IEEE Transactions on Systems Science and Cybernetics, SSC-4(3), 325–332.

  • Megiddo, N., & Wigderson, A. (1986). On play by means of computing machines. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference (TARK-86), IBM and AAAI, Monterey (pp. 259–274). Morgan Kaufmann.

  • Milch, B., Marthi, B., Sontag, D., Russell, S. J., Ong, D., & Kolobov, A. (2005). BLOG: Probabilistic models with unknown objects. In Proceedings of IJCAI-05, Edinburgh.

  • Newell, A. (1982). The knowledge level. Artificial Intelligence, 18(1), 82–127.

  • Nilsson, N. J. (1991). Logic and artificial intelligence. Artificial Intelligence, 47(1–3), 31–56.

  • Papadimitriou, C. H., & Yannakakis, M. (1994). On complexity as bounded rationality. In Symposium on Theory of Computation (STOC-94), Montreal.

  • Parr, R., & Russell, S. J. (1998). Reinforcement learning with hierarchies of machines. In M. I. Jordan, M. Kearns, & S. A. Solla (Eds.), Advances in neural information processing systems 10. Cambridge: MIT.

  • Pfeffer, A. (2001). IBAL: A probabilistic rational programming language. In Proceedings of IJCAI-01, Seattle (pp. 733–740).

  • Russell, S. J. (1997). Rationality and intelligence. Artificial Intelligence, 94, 57–77.

  • Russell, S. J. (1998). Learning agents for uncertain environments (extended abstract). In Proceedings of the Eleventh Annual ACM Workshop on Computational Learning Theory (COLT-98), Madison (pp. 101–103). ACM.

  • Russell, S. J., & Norvig, P. (1995). Artificial intelligence: A modern approach. Upper Saddle River: Prentice-Hall.

  • Russell, S. J., & Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Artificial Intelligence Research, 3, 575–609.

  • Russell, S. J., & Wefald, E. H. (1989). On optimal game-tree search using rational meta-reasoning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit (pp. 334–340). Morgan Kaufmann.

  • Russell, S. J., & Wefald, E. H. (1991a). Do the right thing: Studies in limited rationality. Cambridge: MIT.

  • Russell, S. J., & Wefald, E. H. (1991b). Principles of metareasoning. Artificial Intelligence, 49(1–3), 361–395.

  • Russell, S. J., & Zilberstein, S. (1991). Composing real-time systems. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney. Morgan Kaufmann.

  • Shoham, Y., & Leyton-Brown, K. (2009). Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge/New York: Cambridge University Press.

  • Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99–118.

  • Simon, H. A. (1958). Rational choice and the structure of the environment. In Models of bounded rationality (Vol. 2). Cambridge: MIT.

  • Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7, 1–22, 224–254.

  • Srivastava, S., Russell, S., Ruan, P., & Cheng, X. (2014). First-order open-universe POMDPs. In Proceedings of UAI-14, Quebec City.

  • Sutton, R., Precup, D., & Singh, S. P. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112, 181–211.

  • Tadepalli, P. (1991). A formalization of explanation-based macro-operator learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney (pp. 616–622). Morgan Kaufmann.

  • Tennenholtz, M. (2004). Program equilibrium. Games and Economic Behavior, 49(2), 363–373.

  • Vapnik, V. (2000). The nature of statistical learning theory. Berlin/New York: Springer.

  • von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior (1st ed.). Princeton: Princeton University Press.

  • Wellman, M. P. (1994). A market-oriented programming environment and its application to distributed multicommodity flow problems. Journal of Artificial Intelligence Research, 1(1), 1–23.

  • Wellman, M. P., & Doyle, J. (1991). Preferential semantics for goals. In Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91), Anaheim (Vol. 2, pp. 698–703). AAAI Press.

  • Zilberstein, S., & Russell, S. J. (1996). Optimal composition of real-time systems. Artificial Intelligence, 83, 181–213.

Acknowledgements

An earlier version of this paper appeared in the journal Artificial Intelligence, published by Elsevier. That paper drew on previous work with Eric Wefald and Devika Subramanian. More recent results were obtained with Nick Hay. Thanks also to Michael Wellman, Michael Fehling, Michael Genesereth, Russ Greiner, Eric Horvitz, Henry Kautz, Daphne Koller, Bart Selman, and Daishi Harada for many stimulating discussions on the topic of bounded rationality. The research was supported by NSF grants IRI-8903146, IRI-9211512, and IRI-9058427, and by a UK SERC Visiting Fellowship. The author is supported by the Chaire Blaise Pascal, funded by l’État et la Région Île-de-France and administered by the Fondation de l’École Normale Supérieure.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Russell, S. (2016). Rationality and Intelligence: A Brief Update. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_2
