A Refinement Framework for Autonomous Agents

  • Qin Li
  • Graeme Smith
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8195)

Abstract

An autonomous agent is one that is not only directed by its environment, but is also driven by internal motivation to achieve certain goals. The popular Belief-Desire-Intention (BDI) design paradigm allows such agents to adapt to environmental changes by calculating a new execution path to their current goal or, when necessary, switching to another goal. In this paper we present an approach to modelling autonomous agents using an extension of Object-Z. This extension supports both data and action refinement, and uses LTL formulas to describe an agent’s desire as a sequence of prioritised goals. It turns out, however, that the introduction of desire-driven behaviour is not monotonic with respect to refinement. We therefore introduce an additional refinement proof obligation to enable the use of simulation rules when checking refinement.
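The BDI adaptation described above, pursuing the highest-priority goal for which a plan exists and falling back to a lower-priority goal when the environment rules the current one out, can be sketched informally. The following is an illustrative Python sketch only, not the paper's Object-Z formalism; the names `deliberate` and `plan_for` are hypothetical:

```python
# Illustrative sketch (not the paper's Object-Z model): a minimal
# BDI-style deliberation step. The agent's "desire" is a prioritised
# list of goals; it commits to the highest-priority goal for which a
# plan exists under its current beliefs, and falls back to the next
# goal when the planner finds no plan.

def deliberate(beliefs, goals, plan_for):
    """Return (goal, plan) for the highest-priority achievable goal,
    or (None, None) if no goal has a plan.

    beliefs  -- the agent's current model of its environment
    goals    -- goals in descending priority order
    plan_for -- planner: (beliefs, goal) -> list of actions, or None
    """
    for goal in goals:
        plan = plan_for(beliefs, goal)
        if plan is not None:
            return goal, plan
    return None, None
```

When an environmental change invalidates the current plan, the agent simply re-runs `deliberate` against its updated beliefs, which either yields a new path to the same goal or commits it to the next goal in priority order.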

Keywords

Autonomous agents · BDI agents · Refinement · Object-Z · Temporal logic



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Qin Li (1)
  • Graeme Smith (1)

  1. School of Information Technology and Electrical Engineering, The University of Queensland, Australia
