Toward a Methodology for AI Architecture Evaluation: Comparing Soar and CLIPS

  • Scott A. Wallace
  • John E. Laird
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1757)


We propose a methodology for comparing and evaluating Artificial Intelligence architectures, motivated by the fundamental properties required of general intelligent systems. We describe an initial application of this methodology, comparing Soar and CLIPS in two simple domains. The results reveal both qualitative and quantitative differences between the two architectures, and we use them to explore how architectural properties may affect the agent design process and the performance of agents implemented within each architecture.


Keywords: Short Term Memory, Command Line Interface, Simple Domain, Control Knowledge, Hierarchical Planning





Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Scott A. Wallace, Artificial Intelligence Laboratory, University of Michigan, Ann Arbor, USA
  • John E. Laird, Artificial Intelligence Laboratory, University of Michigan, Ann Arbor, USA
