
Quantitative Verification and Strategy Synthesis for BDI Agents

  • Conference paper
  • NASA Formal Methods (NFM 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13903)

Abstract

Belief-Desire-Intention (BDI) agents feature probabilistic outcomes, e.g. the chance an agent tries but fails to open a door, and non-deterministic choices: what plan/intention to execute next? We want to reason about agents under both probabilities and non-determinism to determine, for example, probabilities of mission success and the strategies used to maximise this. We define a Markov Decision Process describing the semantics of the Conceptual Agent Notation (Can) agent language that supports non-deterministic event, plan, and intention selection, as well as probabilistic action outcomes. The model is derived through an encoding to Milner’s Bigraphs and executed using the BigraphER tool. We show, using probabilistic model checkers PRISM and Storm, how to reason about agents including: probabilistic and reward-based properties, strategy synthesis, and multi-objective analysis. This analysis provides verification and optimisation of BDI agent design and implementation.


Notes

  1. Any logic is allowed, provided entailment is supported.

  2. We determine this by symbolically executing the program as we convert it to an MDP.

  3. PRISM currently does not support reward import.

  4. This probability is never 1, as there is always a chance that bags fail, regardless of type.

References

  1. Bratman, M.: Intention, Plans, and Practical Reason. Harvard University Press (1987)

  2. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language. In: Van de Velde, W., Perram, J.W. (eds.) MAAMAW 1996. LNCS, vol. 1038, pp. 42–55. Springer, Heidelberg (1996). https://doi.org/10.1007/BFb0031845

  3. Winikoff, M., Padgham, L., Harland, J., Thangarajah, J.: Declarative and procedural goals in intelligent agent systems. In: The 8th International Conference on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann (2002)

  4. Hindriks, K.V., De Boer, F.S., Van der Hoek, W., Meyer, J.-J.Ch.: Agent programming in 3APL. Auton. Agents Multi-Agent Syst. 2(4), 357–401 (1999)

  5. Dastani, M.: 2APL: a practical agent programming language. Auton. Agents Multi-Agent Syst. 16(3), 214–248 (2008)

  6. Winikoff, M.: JACK™ intelligent agents: an industrial strength platform. In: Bordini, R.H., Dastani, M., Dix, J., El Fallah Seghrouchni, A. (eds.) Multi-Agent Programming. MSASSO, vol. 15, pp. 175–193. Springer, Boston (2005). https://doi.org/10.1007/0-387-26350-0_7

  7. Bordini, R.H., Hübner, J.F., Wooldridge, M.: Programming Multi-Agent Systems in AgentSpeak Using Jason, vol. 8. Wiley, Hoboken (2007)

  8. Pokahr, A., Braubach, L., Jander, K.: The Jadex project: programming model. In: Ganzha, M., Jain, L. (eds.) Multiagent Systems and Applications, pp. 21–53. Springer, Cham (2013). https://doi.org/10.1007/978-3-642-33323-1_2

  9. Wooldridge, M.: An Introduction to Multiagent Systems. Wiley, Hoboken (2009)

  10. Luckcuck, M., Farrell, M., Dennis, L.A., Dixon, C., Fisher, M.: Formal specification and verification of autonomous robotic systems: a survey. ACM Comput. Surv. (CSUR) 52(5), 1–41 (2019)

  11. Dennis, L.A., Fisher, M., Webster, M.P., Bordini, R.H.: Model checking agent programming languages. Autom. Softw. Eng. 19(1), 5–63 (2012)

  12. Jensen, A.B.: Machine-checked verification of cognitive agents. In: Proceedings of the 14th International Conference on Agents and Artificial Intelligence, pp. 245–256 (2022)

  13. Chen, H.: Applications of cyber-physical system: a literature review. J. Ind. Integr. Manag. 2(03), 1750012 (2017)

  14. Archibald, B., Calder, M., Sevegnani, M., Xu, M.: Probabilistic BDI agents: actions, plans, and intentions. In: Calinescu, R., Păsăreanu, C.S. (eds.) SEFM 2021. LNCS, vol. 13085, pp. 262–281. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-92124-8_15

  15. Ghallab, M., Nau, D., Traverso, P.: Automated Planning: Theory and Practice. Elsevier (2004)

  16. Geffner, H., Bonet, B.: A Concise Introduction to Models and Methods for Automated Planning. Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 8, no. 1, pp. 1–141 (2013)

  17. Abdeddaïm, Y., Asarin, E., Maler, O.: Scheduling with timed automata. Theor. Comput. Sci. 354(2), 272–300 (2006)

  18. Kwiatkowska, M., Parker, D.: Automated verification and strategy synthesis for probabilistic systems. In: Van Hung, D., Ogawa, M. (eds.) ATVA 2013. LNCS, vol. 8172, pp. 5–22. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-02444-8_2

  19. Sardina, S., Padgham, L.: A BDI agent programming language with failure handling, declarative goals, and planning. Auton. Agents Multi-Agent Syst. 23, 18–70 (2011)

  20. Milner, R.: The Space and Motion of Communicating Agents. Cambridge University Press, Cambridge (2009)

  21. Archibald, B., Calder, M., Sevegnani, M.: Probabilistic bigraphs. Formal Aspects Comput. 34(2), 1–27 (2022)

  22. Bellman, R.: A Markovian decision process. J. Math. Mech. 679–684 (1957)

  23. Sevegnani, M., Calder, M.: BigraphER: rewriting and analysis engine for bigraphs. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 494–501. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_27

  24. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_47

  25. Hensel, C., Junges, S., Katoen, J.P., Quatmann, T., Volk, M.: The probabilistic model checker Storm. Int. J. Softw. Tools Technol. Transfer 1–22 (2021)

  26. Di Pierro, A., Wiklicky, H.: An operational semantics for probabilistic concurrent constraint programming. In: The 1998 International Conference on Computer Languages, pp. 174–183. IEEE (1998)

  27. Xu, M.: Bigraph models of smart manufacturing and rover example in Can (2022). https://doi.org/10.5281/zenodo.7441574

  28. Hansson, H., Jonsson, B.: A logic for reasoning about time and reliability. Formal Aspects Comput. 6(5), 512–535 (1994)

  29. Forejt, V., Kwiatkowska, M., Parker, D.: Pareto curves for probabilistic model checking. In: Chakraborty, S., Mukund, M. (eds.) ATVA 2012. LNCS, pp. 317–332. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33386-6_25

  30. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018)

  31. Hartmanns, A., Klauck, M.: The modest state of learning, sampling, and verifying strategies. In: Margaria, T., Steffen, B. (eds.) ISoLA 2022. LNCS, vol. 13703, pp. 406–432. Springer, Heidelberg (2022). https://doi.org/10.1007/978-3-031-19759-8_25

  32. Padgham, L., Singh, D.: Situational preferences for BDI plans. In: The 2013 International Conference on Autonomous Agents and Multi-agent Systems, pp. 1013–1020 (2013)

  33. Meneguzzi, F., De Silva, L.: Planning in BDI agents: a survey of the integration of planning algorithms and agent reasoning. Knowl. Eng. Rev. 30(1), 1–44 (2015)

  34. De Silva, L., Meneguzzi, F.R., Logan, B.: BDI agent architectures: a survey. In: The 29th International Joint Conference on Artificial Intelligence (2020)

  35. Bordini, R.H., Bazzan, A.L.C., Jannone, R.D.O., Basso, D.M., Vicari, R.M., Lesser, V.R.: AgentSpeak(XL): efficient intention selection in BDI agents via decision-theoretic task scheduling. In: The First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 3, pp. 1294–1302 (2002)

  36. Xu, M., McAreavey, K., Bauters, K., Liu, W.: Intention interleaving via classical replanning. In: 2019 IEEE 31st International Conference on Tools with Artificial Intelligence, pp. 85–92. IEEE (2019)

  37. Logan, B., Thangarajah, J., Yorke-Smith, N.: Progressing intention progression: a call for a goal-plan tree contest. In: AAMAS, pp. 768–772 (2017)

  38. Intention progression competition. https://www.intentionprogression.org/

  39. Xu, M., Bauters, K., McAreavey, K., Liu, W.: A formal approach to embedding first-principles planning in BDI agent systems. In: Ciucci, D., Pasi, G., Vantaggi, B. (eds.) SUM 2018. LNCS (LNAI), vol. 11142, pp. 333–347. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00461-3_23

Acknowledgements

This work is supported by the EPSRC, under PETRAS SRF grant MAGIC (EP/S035362/1), S4: Science of Sensor Systems Software (EP/N007565/1), and an Amazon Research Award on Automated Reasoning.

Author information

Correspondence to Mengwei Xu.

A Appendix

The language used in the plan-body in Can is defined by the grammar:

\( \pm b \ | \ act \ | \ e \ | \ P _{1}; P _{2} \ | \ P _{1} \triangleright P _{2} \ | \ P _{1}\parallel P _{2} \ | \ goal (\varphi _{s}, P , \varphi _{f}) \)

where \(\pm b\) stands for belief addition and deletion, act for a primitive agent action, and e for a sub-event (i.e. an internal event). Actions act take the form \(act = \psi \leftarrow \langle \phi ^{+}, \phi ^{-} \rangle \), where \( \psi \) is the pre-condition, and \(\phi ^{+} \) and \( \phi ^{-} \) are the addition and deletion sets (resp.) of belief atoms, i.e. a belief base \( \mathcal {B} \) is revised to \( (\mathcal {B} \setminus \phi ^{-}) \cup \phi ^{+}\) when the action executes. To execute a sub-event, a plan (corresponding to that event) is selected and its plan-body added in place of the event. In this way plans may be nested (similar to sub-routine calls in other languages). In addition, there are composite programs: \( P _{1}; P _{2} \) for sequencing, \( P _{1} \rhd P _{2} \), which executes \( P _{2}\) in the case that \( P _{1} \) fails, and \( P _{1}\parallel P _{2} \) for interleaved concurrency. Finally, a declarative goal program \( goal (\varphi _{s}, P , \varphi _{f}) \) expresses that the declarative goal \( \varphi _{s} \) should be achieved through program P, failing if \( \varphi _{f} \) becomes true, and retrying as long as neither \( \varphi _{s} \) nor \( \varphi _{f} \) is true (see [19] for details).
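The belief-revision effect of an action can be sketched in a few lines of Python. This is an illustrative model only, not the paper's bigraph encoding: it assumes belief atoms are plain strings and that entailment of the pre-condition \( \psi \) reduces to set inclusion.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A Can action act = psi <- <phi+, phi->  (hypothetical representation)."""
    pre: frozenset     # pre-condition psi: atoms that must hold in B
    add: frozenset     # phi+: atoms added on execution
    delete: frozenset  # phi-: atoms deleted on execution

def execute(beliefs: set, act: Action) -> set:
    """Revise the belief base: B becomes (B \\ phi-) | phi+ when psi is entailed."""
    if not act.pre <= beliefs:
        raise ValueError("pre-condition not entailed; action cannot execute")
    return (beliefs - act.delete) | act.add

# Illustrative action: opening a door, echoing the example in the abstract.
open_door = Action(pre=frozenset({"at_door", "door_closed"}),
                   add=frozenset({"door_open"}),
                   delete=frozenset({"door_closed"}))

print(sorted(execute({"at_door", "door_closed"}, open_door)))
# → ['at_door', 'door_open']
```

Note that a probabilistic action outcome, as in the paper's MDP semantics, would replace the single \(\langle \phi ^{+}, \phi ^{-} \rangle \) pair with a distribution over such pairs.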

Figure 7 gives the complete set of semantic rules for evolving an intention. For example, act handles the execution of an action, when the pre-condition \(\psi \) is met, resulting in a belief state update. Rule event replaces an event with the set of relevant plans, while rule select chooses an applicable plan from a set of relevant plans while retaining un-selected plans as backups. With these backup plans, the rules for failure recovery \(\rhd _{;}\), \(\rhd _{\top }\), and \(\rhd _{\bot }\) enable new plans to be selected if the current plan fails (e.g. due to environment changes). Rules ; and \(;_{\top }\) allow executing plan-bodies in sequence, while rules \(\Vert _{1} \), \(\Vert _{2} \), and \(\Vert _{\top } \) specify how to execute (interleaved) concurrent programs (within an intention). Rules \(G_s\) and \(G_f\) deal with declarative goals when either the success condition \(\varphi _s\) or the failure condition \(\varphi _f\) becomes true. Rule \( G_{init}\) initialises persistence by setting the program in the declarative goal to be \( P \rhd P\), i.e. if P fails, try P again, and rule \( G_;\) takes care of performing a single step on an already initialised program. Finally, the derivation rule \(G_{\rhd }\) re-starts the original program if the current program has finished or become blocked (when neither \(\varphi _s\) nor \(\varphi _f\) is true).
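The three failure-recovery rules can be summarised as a single-step function over \( P _{1} \rhd P _{2} \). The sketch below is illustrative and not the paper's encoding: programs are represented by opaque values, a user-supplied `step` function plays the role of the intention-level transition relation, and `blocked` stands for "no rule applies to \( P _{1} \)".

```python
NIL = "nil"  # the successfully completed (empty) program

def step_try(p1, p2, step, blocked):
    """One derivation step of p1 |> p2 (hypothetical representation).

    step(p)    -- advances p by one intention-level step (used by rule |>;)
    blocked(p) -- True when p has failed and cannot progress (rule |>_bot)
    """
    if p1 == NIL:                    # |>_top: p1 succeeded, discard backup p2
        return NIL
    if blocked(p1):                  # |>_bot: p1 failed, fall back to p2
        return p2
    return ("try", step(p1), p2)     # |>; : progress p1, keep p2 as recovery

# Toy programs: "a" completes in one step, "fail" cannot progress at all.
step = lambda p: NIL if p == "a" else p
blocked = lambda p: p == "fail"

print(step_try("fail", "recover", step, blocked))
# → recover
```

The \( G_{init}\) rule admits the same reading: initialising the declarative goal's body to \( P \rhd P \) means the recovery branch is simply a fresh copy of the original program, so the goal persists until \(\varphi _s\) or \(\varphi _f\) holds.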

Fig. 7. Complete intention-level Can semantics.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Archibald, B., Calder, M., Sevegnani, M., Xu, M. (2023). Quantitative Verification and Strategy Synthesis for BDI Agents. In: Rozier, K.Y., Chaudhuri, S. (eds) NASA Formal Methods. NFM 2023. Lecture Notes in Computer Science, vol 13903. Springer, Cham. https://doi.org/10.1007/978-3-031-33170-1_15


  • DOI: https://doi.org/10.1007/978-3-031-33170-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33169-5

  • Online ISBN: 978-3-031-33170-1

