
Improving domain-independent intention selection in BDI systems

Autonomous Agents and Multi-Agent Systems

Abstract

The Belief-Desire-Intention (BDI) agent paradigm provides a powerful basis for developing complex systems based on autonomous intelligent agents. These agents have, at any point in time, a set of intentions encoding the various tasks the agent is working on. Despite its importance, the problem of selecting which intention to progress at any point in time has received almost no attention and has been mostly left to the programmer to resolve in an application-dependent manner. In this paper, we implement and evaluate two domain-independent intention selection mechanisms based on the ideas of enablement checking and low coverage prioritisation. Through a battery of automatically generated synthetic tests and one real program, we compare these with the commonly used intention selection mechanisms of First-In-First-Out (FIFO) and Round Robin (RR). We found that enablement checking, which is incorporated into low coverage prioritisation, is never detrimental and provides substantial benefits when running vulnerable programs in dynamic environments. This is a significant finding as such a check can be readily applied to FIFO and RR, giving an extremely simple and effective mechanism to be added to existing BDI frameworks. In turn, low coverage prioritisation provides a significant further benefit.


Notes

  1. As is customary in agent programming, we use the terms event and goal interchangeably. This reflects a procedural view of goals as “goals-to-do” (i.e., responses to events), rather than the alternative “goals-to-be” perspective taken in agent theory [9, 24], for example. Nonetheless, we recognise the existence of other approaches to BDI programming with a more declarative perspective on goals.

    Fig. 1: A typical BDI agent programming framework

  2. Such techniques can still be integrated with domain-specific schemes; see the discussion in Sect. 7.

  3. Where no intention is enabled, some non-enabled intention is selected. Progressing this intention will, if possible, trigger failure recovery by choosing a different plan for the current goal or some ancestor goal. If no alternative plan is available, it will result in the failure of that intention.

  4. Remember that BDI plan libraries are generally developed in a modular, incremental, and independent manner, so plans for \(G_1\) and \(G_4\) may have been developed separately. Hence the context condition of plan \(P_2\) may not account for the incomplete coverage of plans \(P_6\)–\(P_8\).

  5. Of course, the more domain constraints are encoded, the more accurate the resulting coverage estimates will be. In practice, we expect many domain constraints to be readily available at design time.

  6. We assume the product, conjunction, and maximum over an empty set of elements are equal to 1, \({\mathtt {false}}\), and 0, respectively. Also, for legibility, we shall sometimes abuse notation and treat lists or sequences (e.g., a plan body program \(P\) or intention \(I\)) as sets.

  7. Observe that it is also possible to compute (and store) offline the coverage of every partially executed plan, so that even \(C(I)\), for any partially executed intention \(I\), can be reduced to a table lookup.

  8. The simplification of at most one subgoal posting per plan and no overlap between plans’ context conditions is not trivial, but we believe it is justified, as it enables us to gain a well-structured understanding of the intention selection approaches.

  9. Work has been done to recognise these p-effects and ensure that the agent does not itself undo them prematurely [31].

  10. We found no interesting experimental results when varying this value, other than simply confirming the expected fact that the greater the setup distance, the more likely it is that this dependency will be broken, causing the intentions to fail or suspend more frequently.

  11. We focused only on enablement checking because, as seen from the experiments reported in Sect. 4, it is, after all, what yields the most benefit. Moreover, the program structures to be used include recursive subgoal postings, which cannot, at this stage, be handled by the low coverage prioritisation mechanism.

  12. As described in Sect. 3, this is measured as the number of failure recoveries divided by the number of scheduler steps.

References

  1. Benfield, S. S., Hendrickson, J., & Galanti, D. (2006). Making a strong business case for multiagent technology. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 10–15).

  2. Bordini, R. H., Braubach, L., Dastani, M., El Fallah-Seghrouchni, A., Gómez-Sanz, J. J., Leite, J., et al. (2006). A survey of programming languages and platforms for multi-agent systems. Informatica (Slovenia), 30(1), 33–44.


  3. Bordini, R. H., & Moreira, A. F. (2004). Proving BDI properties of agent-oriented programming languages. Annals of Mathematics and Artificial Intelligence, 42(1–3), 197–226.


  4. Bordini, R. H., Bazzan, A. L. C., de Oliveira Jannone, R., Basso, D. M., Vicari, R. M., & Lesser, V. R. (2002). AgentSpeak(XL): Efficient intention selection in BDI agents via decision-theoretic task scheduling. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 1294–1302).

  5. Bordini, R. H., Hübner, J. F., & Wooldridge, M. (2007). Programming multi-agent systems in AgentSpeak using Jason. Wiley Series in Agent Technology. John Wiley & Sons. ISBN 0470029005.


  6. Bratman, M. E., Israel, D. J., & Pollack, M. E. (1988). Plans and resource-bounded practical reasoning. Computational Intelligence, 4(3), 349–355.


  7. Busetta, P., Rönnquist, R., Hodgson, A., & Lucas, A. (1999). JACK intelligent agents: Components for intelligent agents in Java. AgentLink Newsletter, 2, 2–5.


  8. Clement, B. J., Durfee, E. H., & Barrett, A. C. (2007). Abstract reasoning for planning and coordination. Journal of Artificial Intelligence Research, 28, 453–515.


  9. Cohen, P. R., & Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42, 213–261.


  10. Dastani, M., de Boer, F. S., Dignum, F., & Meyer, J.-J. (2003). Programming agent deliberation: An approach illustrated using the 3APL language. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 97–104).

  11. de Boer, F. S., Hindriks, K. V., van der Hoek, W., & Meyer, J.-J. (2007). A verification framework for agent programming with declarative goals. Journal of Applied Logic, 5(2), 277–302.


  12. Decker, K., & Lesser, V. R. (1993). Quantitative modeling of complex environments. International Journal of Intelligent Systems in Accounting, Finance and Management. Special Issue on Mathematical and Computational Models and Characteristics of Agent Behaviour, 2, 215–234.


  13. Georgeff, M. P., & Ingrand, F. F. (1989). Decision making in an embedded reasoning system. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 972–978).

  14. Gomes, C. P., Sabharwal, A., & Selman, B. (2009). Model counting. In A. Biere, M. Heule, H. van Maaren, & T. Walsh (Eds.), Handbook of satisfiability, Frontiers in Artificial Intelligence and Applications (Vol. 185, pp. 633–654). Amsterdam: IOS Press. doi:10.3233/978-1-58603-929-5-633. ISBN 978-1-58603-929-5.


  15. Grant, J., Kraus, S., Perlis, D., & Wooldridge, M. (2010). Postulates for revising BDI structures. Synthese, 175, 127–150.


  16. Hindriks, K. V., de Boer, F. S., van der Hoek, W., & Meyer, J.-J. (1999). Agent programming in 3APL. Autonomous Agents and Multi-Agent Systems, 2(4), 357–401.


  17. Horling, B., Lesser, V., Vincent, R., Wagner, T., Raja, A., & Zhang, S., et al. (1999). The TAEMS White Paper. http://mas.cs.umass.edu/paper/182.

  18. Horling, B., Lesser, V., Vincent, R., & Wagner, T. (2006). The soft real-time agent control architecture. Autonomous Agents and Multi-Agent Systems, 12(1), 35–92.


  19. Huber, M. J. (1999). JAM: A BDI-theoretic mobile agent architecture. In Proceedings of the Annual Conference on Autonomous Agents (AGENTS) (pp. 236–243).

  20. Jain, R., Chiu, D.-M., & Hawe, W. R. (1984). A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. Technical report, Eastern Research Laboratory, Digital Equipment Corporation.

  21. Kinny, D., & Georgeff, M. P. (1991). Commitment and effectiveness of situated agents. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 82–88).

  22. Padgham, L., & Winikoff, M. (2004). Developing intelligent agent systems: A practical guide. Wiley Series in Agent Technology. John Wiley & Sons.


  23. Pollack, M. E. (1992). The uses of plans. Artificial Intelligence, 57(1), 43–68.


  24. Rao, A. S., & Georgeff, M. P. (1991). Modeling rational agents within a BDI-architecture. In Proceedings of Principles of Knowledge Representation and Reasoning (KR) (pp. 473–484).

  25. Rao, A. S. (1996). AgentSpeak(L): BDI agents speak out in a logical computable language. In Proceedings of the European Workshop on Modelling Autonomous Agents in a Multi Agent World (MAAMAW) (pp. 42–55).

  26. Rao, A. S., & Georgeff, M. P. (1992). An abstract architecture for rational agents. In Proceedings of Principles of Knowledge Representation and Reasoning (KR) (pp. 438–449).

  27. Sang, T., Bacchus, F., Beame, P., Kautz, H. A., & Pitassi, T. (2004). Combining component caching and clause learning for effective model counting. In Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT).

  28. Sardina, S., & Padgham, L. (2011). A BDI agent programming language with failure recovery, declarative goals, and planning. Autonomous Agents and Multi-Agent Systems, 23(1), 18–70.


  29. Singh, D., Sardina, S., & Padgham, L. (2010). Extending BDI plan selection to incorporate learning from experience. Robotics and Autonomous Systems, 58, 1067–1075.


  30. Thangarajah, J., Winikoff, M., Padgham, L., & Fischer, K. (2002). Avoiding resource conflicts in intelligent agents. In F. van Harmelen (Ed.), Proceedings of the European Conference on Artificial Intelligence (ECAI) (pp. 18–22).

  31. Thangarajah, J., Padgham, L., & Winikoff, M. (2003a). Detecting and exploiting positive goal interaction in intelligent agents. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 401–408).

  32. Thangarajah, J., Padgham, L., & Winikoff, M. (2003b). Detecting and avoiding interference between goals in intelligent agents. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 721–726).

  33. Thangarajah, J., Sardina, S., & Padgham, L. (2012). Measuring plan coverage and overlap for agent reasoning. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 1049–1056).

  34. Vikhorev, K., Alechina, N., & Logan, B. (2011). Agent programming with priorities and deadlines. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 397–404).

  35. Waters, M., Padgham, L., & Sardina, S. (2014). Evaluating coverage based intention selection. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 957–964).

  36. Wei, W., & Selman, B. (2005). A new approach to model counting. In Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT), Lecture Notes in Computer Science (Vol. 3569, pp. 324–339). Berlin: Springer.


Acknowledgments

We acknowledge the support of the Australian Research Council under Discovery Project DP1094627, and Agent Oriented Software for providing us with a JACK licence. We would also like to thank the anonymous reviewers for their useful comments. Part of this work was done while the third author was on sabbatical at Sapienza Università di Roma, Rome, Italy.

Author information


Correspondence to Sebastian Sardina.

Appendices

Appendix 1: Synthetic domain details

We provide here further details on how the synthetic testbed used in Sect. 4 was produced.

1.1 Goal-plan trees

The goal-plan trees induced by the agent’s plan library can be considered perfect binary trees which have had selected branches “pruned” in order to create coverage gaps. We define the depth of a goal-plan tree according to the maximum depth to which subgoals are posted. The top-level goal is at depth 0, a plan relevant to a goal at depth \(d\) is also at depth \(d\), and subgoals posted by plans at depth \(d\) are at depth \(d+1\).
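To make this depth convention concrete, here is a small Python sketch (the class names are ours, not the testbed's): a plan shares the depth of the goal it handles, and any subgoal it posts sits one level deeper.

    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        name: str
        subgoals: list = field(default_factory=list)   # subgoals posted by the body

    @dataclass
    class Goal:
        name: str
        plans: list = field(default_factory=list)      # plans relevant to this goal

    def assign_depths(goal, depth=0, out=None):
        """A plan handling a goal at depth d is also at depth d; the
        subgoals it posts are at depth d + 1."""
        out = {} if out is None else out
        out[goal.name] = depth
        for plan in goal.plans:
            out[plan.name] = depth
            for sub in plan.subgoals:
                assign_depths(sub, depth + 1, out)
        return out

    # Example: top-level goal G0, handled by plan P0, which posts subgoal G1.
    g0 = Goal("G0", plans=[Plan("P0", subgoals=[Goal("G1")])])
    print(assign_depths(g0))    # {'G0': 0, 'P0': 0, 'G1': 1}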

1.2 Gap levels

If a goal-plan tree has a gap level at depth \(d\), then for every subgoal \(!G\) at depth \(d\) there is at least one possible world state in which \(!G\) has no applicable plan. These coverage gaps are modelled using p-effects: \(!G\) is handled by a single plan, \(!G : \phi \leftarrow \delta \), whose context condition \(\phi \) is made true by the plan that posts \(!G\). The top-level goal has no coverage gaps, so in a tree of depth \(d\) any gap levels must lie between depths 1 and \(d\). Thus, if a goal-plan tree has depth \(d\) and \(g\) gap levels, the number of possible combinations of gap levels is \(\left( {\begin{array}{c}d\\ g\end{array}}\right) = \frac{d!}{(d-g)!\,g!}\).

In these experiments, an agent is tasked with achieving 10 top-level goals, each of which decomposes into a binary goal-plan tree of maximum depth 4 with 2 gap levels. There are therefore \(\left( {\begin{array}{c}4\\ 2\end{array}}\right) = 6\) possible combinations of gap levels. To obtain an even distribution over such structures, the gap levels in each of the ten goal-plan trees are randomly selected before each test, as in the snippet below.
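For concreteness, the count and the random placement can be reproduced with a few lines of Python (our own illustration, not part of the testbed):

    import math
    import random

    d, g = 4, 2                       # tree depth and number of gap levels
    print(math.comb(d, g))            # 6 possible combinations

    # Randomly choose which depths (between 1 and d) carry coverage gaps,
    # independently for each of the ten goal-plan trees.
    gap_levels = sorted(random.sample(range(1, d + 1), g))
    print(gap_levels)                 # e.g. [2, 4]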

1.3 Setting the coverage

The procedure for setting the coverage of a binary goal-plan tree is shown in Algorithm 1. As can be seen, it is a random, recursive process that sets the coverage of the top-level goal by setting the distributions of the context conditions of the various plans occurring in the tree. The random element ensures that even if two top-level goals have the same coverage, and the goal-plan trees beneath them have the same depth and the same gap levels, the coverage gaps within the trees will still be different.

Algorithm 1: Setting the coverage of a binary goal-plan tree
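Algorithm 1 is rendered only as an image in this version, so the following Python sketch is our reconstruction of one plausible such procedure, not the authors' code. It assumes that plans for the same goal have non-overlapping context conditions (so their coverages sum) and that a plan's coverage is the coverage of its context condition multiplied by the coverages of the subgoals it posts:

    import random

    def set_coverage(goal, target):
        """Hypothetical sketch: recursively distribute a goal's target
        coverage over the context conditions of the plans beneath it."""
        # Randomly split the target among the goal's plans; since their
        # context conditions are assumed disjoint, the shares simply sum.
        weights = [random.random() for _ in goal["plans"]]
        total = sum(weights)
        for plan, w in zip(goal["plans"], weights):
            share = target * w / total
            k = len(plan["subgoals"])
            # Choose the context-condition coverage and each subgoal's
            # coverage so that their product equals the plan's share.
            piece = share ** (1.0 / (k + 1))
            plan["context_coverage"] = piece
            for subgoal in plan["subgoals"]:
                set_coverage(subgoal, piece)

    # Example: a goal with two plans, the first of which posts one subgoal.
    tree = {"plans": [{"subgoals": [{"plans": [{"subgoals": []}]}]},
                      {"subgoals": []}]}
    set_coverage(tree, 0.6)

The random weights provide the variability described above: two trees given the same target coverage will almost surely end up with differently placed gaps.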

1.4 Test steps

For each test, the following steps are performed (a code sketch of the driver follows the list):

  1. Ten different top-level goals are randomly selected from the agent’s library.

  2. A random coverage level \(c \in [0.01, 0.99)\) is selected.

  3. The coverage of each of the ten top-level goals is set to \(c\).

  4. A random environmental dynamism level \(d \in [0.0, 1.0)\) is selected.

  5. The initial state of the environment is set by resampling each environment variable according to its probability distribution. This state is then saved.

  6. For each intention selection mechanism being tested:

     (a) The ten top-level goals selected in step 1 are posted, and the results of the agent runs are recorded.

     (b) The environment is reset to the state saved in step 5.
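A minimal sketch of this test driver; the objects and methods it uses (library, environment, run_agent) are hypothetical placeholders for the testbed's actual API:

    import random

    def run_test(library, environment, mechanisms, run_agent):
        """Run one synthetic test following steps 1-6 above (sketch only)."""
        goals = random.sample(library.top_level_goals, 10)      # step 1
        c = random.uniform(0.01, 0.99)                          # step 2
        for goal in goals:                                      # step 3
            goal.set_coverage(c)
        d = random.uniform(0.0, 1.0)                            # step 4
        environment.resample_all_variables()                    # step 5
        saved = environment.save_state()
        results = {}
        for mechanism in mechanisms:                            # step 6
            results[mechanism] = run_agent(goals, mechanism, d)   # (a)
            environment.restore_state(saved)                      # (b)
        return results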

Appendix 2: Tower of Hanoi agent design

The standard Tower of Hanoi puzzle consists of three pins, \(p_A\), \(p_B\) and \(p_C\), and \(n\) discs, \(d_1, d_2, \ldots , d_n\), numbered according to their size, with \(d_1\) being the smallest and \(d_n\) the largest. In the game’s initial state, all of the discs are placed on pin \(p_A\) in order of size, i.e., with disc \(d_n\) at the bottom of the stack and \(d_1\) at the top. The goal-plan tree structure used by the agent to solve a single instance of the Tower of Hanoi is shown in Fig. 9. In our tests, the agent is tasked with solving several Tower of Hanoi instances simultaneously. Therefore, at any point in time, the agent’s intention base may comprise several partially-executed instances of this intention.

The goal-plan tree’s top-level goal, \({ SolveTower }\), is handled by a single plan, \({ SubDivide }\), which divides the Tower of Hanoi problem into a series of smaller sub-problems. The branch stemming from the plan \({ RecursiveSolution }\) is a BDI implementation of a recursive solution to the Tower of Hanoi, while the alternative branch, \({ PathPlanningSolution }\), solves the tower by searching the state-space and planning a sequence of moves. We will now describe each component of the goal-plan tree in more detail (note that when referring to discs, \(d_n\) will always refer to the largest disc, while all other subscripts denote variables).

Algorithm 2: The recursive stacking solution (procedures BuildStack and GetDiscToPin)

1.1 Moving discs

The goal \({ MakeMove(d_k, p_x, p_y) }\) is posted by the agent whenever it needs to move disc \(d_k\) from pin \(p_x\) to pin \(p_y\), and is handled by a single plan, the primitive action \({ MoveDisc }\). The context condition of \({ MoveDisc }\) first checks that the move is legal, i.e., that disc \(d_k\) is at the top of \(p_x\) and is smaller than the disc at the top of \(p_y\). It then checks that the move is available, given the constraints imposed by the current state of the tower engines. If these conditions hold, the plan will be selected and the disc moved; if not, the \({ MakeMove }\) goal will have no applicable plans.
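The legality part of this context condition is straightforward to state in code. The sketch below uses our own state representation (each pin maps to a list of disc numbers, topmost last); the engine-availability check is domain-specific and omitted:

    def is_legal_move(pins, disc, src, dst):
        """True iff `disc` sits on top of pin `src` and is smaller than the
        disc on top of pin `dst` (or `dst` is empty).  Discs are numbered
        by size, 1 being the smallest."""
        if not pins[src] or pins[src][-1] != disc:
            return False              # disc is not on top of the source pin
        return not pins[dst] or disc < pins[dst][-1]

    # Example: the initial 3-disc tower on pin A.
    pins = {"A": [3, 2, 1], "B": [], "C": []}
    print(is_legal_move(pins, 1, "A", "B"))   # True
    print(is_legal_move(pins, 2, "A", "B"))   # False: disc 2 is not on top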

1.2 Solving sub-problems

The traditional solution to the Tower of Hanoi is recursive, i.e., to solve a tower of \(n\) discs, it first solves a tower of \(n-1\) discs. The base case, a tower of just one disc, can be solved trivially. The plan \({ SubDivide }\) sub-divides a Tower of Hanoi into sub-problems in a similar way. The aim of this plan is to stack discs \(d_n, d_{n-1}, \ldots , d_1\) (where \(d_n\) is the largest and \(d_1\) the smallest) onto pin \(p_C\). To achieve this it first attempts to stack \(d_n\) onto \(p_C\), then both \(d_n\) and \(d_{n-1}\), and so on. The subgoal \({ SolveSubTower(d_k) }\) represents the need to create a stack of discs, \(d_n, d_{n-1}, \ldots , d_k\), on \(p_C\). \({ SubDivide }\) therefore synchronously posts \(n\) instances of this goal, i.e., \({ SolveSubTower(d_n) },{ SolveSubTower(d_{n-1}) },\ldots ,{ SolveSubTower(d_1) }\).

Fig. 9: The goal-plan tree structure for the Tower of Hanoi agent

1.3 Recursive solution

The agent’s preferred technique for solving a stacking sub-problem is a modification of the traditional recursive solution. From an arbitrary initial state, this solution will build a stack of any given size on any pin using the fewest possible moves. The pseudocode for this algorithm is shown in Algorithm 2. For any values of \(i, j\) such that \(1 \le j \le i \le n\), the procedure will stack discs \(d_i, d_{i-1}, \ldots , d_j\) onto a given pin \(p_x\).

The structure of this algorithm is reflected directly in the structure of the goal-plan tree in Fig. 9. Calls to procedure BuildStack are represented by the goal of the same name, and the procedure body is implemented in the plan \({ RecursivelyBuildStack }\). Calls to procedure GetDiscToPin are represented by the goal of the same name, with its base and recursive cases implemented in plans \({ BaseCase }\) and \({ RecursiveCase }\). The plan \({ RecursiveSolution }\) handles a subgoal of the form \({ SolveSubTower(d_k) }\), which, as described above, represents the need to stack discs \(d_n, d_{n-1}, \ldots , d_k\) onto \(p_C\). The plan thus posts a single subgoal, \({ BuildStack(d_n, d_k, p_C) }\), which effectively “calls” procedure BuildStack, beginning the recursive process.

The goal \({ BuildStack(d_i, d_j, p_x) }\) represents the need to build a stack of discs, \(d_i, d_{i-1},\) \( \ldots , d_j\), on pin \(p_x\). The plan \({ RecursivelyBuildStack }\) handles this goal, and achieves it by first getting disc \(d_i\) onto the top of pin \(p_x\), then placing disc \(d_{i-1}\) on top of \(d_i\), and so on, until the stack is complete. It does this by posting a series of goals of type \({ GetDiscToPin }\).

As the name implies, the goal \({ GetDiscToPin(d_k, p_x) }\) represents the desire to place disc \(d_k\) onto the top of pin \(p_x\). It is handled by two plans, \({ BaseCase }\) and \({ RecursiveCase }\). The context condition of \({ BaseCase }\) is simply the boolean condition of the if-then construct in procedure GetDiscToPin, i.e., it checks whether the goal \({ GetDiscToPin }\) has already been achieved. Thus, if selected, this plan need not do anything. The context condition of \({ RecursiveCase }\) is the negation of the same boolean condition. This plan first finds the “temporary” pin, \(p_{tmp}\), i.e., the pin which is neither the destination pin, \(p_x\), nor the pin on which disc \(d_k\) currently sits. It then posts a subgoal, \({ BuildStack(d_{k-1}, d_1, p_{tmp}) }\). The purpose of this recursive posting is to stack all discs smaller than \(d_k\) onto the “temporary” pin, thus clearing the way for \(d_k\) to be moved onto \(p_x\). This move is achieved by posting a \({ MakeMove }\) subgoal.
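Since Algorithm 2 appears only as an image here, the following runnable Python sketch is our rendering of the BuildStack/GetDiscToPin recursion just described; the pin representation matches the earlier legality check, and the direct pops and appends stand in for the \({ MakeMove }\) subgoals the agent would post:

    def build_stack(pins, i, j, dst):
        """BuildStack(d_i, d_j, p_x): place discs d_i, ..., d_j onto `dst`."""
        for k in range(i, j - 1, -1):
            get_disc_to_pin(pins, k, dst)

    def get_disc_to_pin(pins, k, dst):
        """GetDiscToPin(d_k, p_x): get disc k onto the top of pin `dst`."""
        if pins[dst] and pins[dst][-1] == k:
            return                                # BaseCase: already achieved
        # RecursiveCase: find the "temporary" pin and clear all smaller
        # discs onto it, then make the move itself.
        src = next(p for p, discs in pins.items() if k in discs)
        tmp = next(p for p in pins if p not in (src, dst))
        if k > 1:
            build_stack(pins, k - 1, 1, tmp)
        pins[src].pop()                           # the MakeMove step
        pins[dst].append(k)

    # Example: SolveSubTower(d_1) on 3 discs, i.e. BuildStack(d_3, d_1, p_C).
    pins = {"A": [3, 2, 1], "B": [], "C": []}
    build_stack(pins, 3, 1, "C")
    print(pins)      # {'A': [], 'B': [], 'C': [3, 2, 1]}, in 7 moves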

While simple and efficient, this method is brittle. In following this shortest route, the agent pays no attention to the current state of the engines, meaning that it may post a \({ MakeMove }\) goal that is not currently achievable. This might (depending on the intention selection scheme used) cause a goal failure that will propagate up to the \({ RecursiveSolution }\) plan.

1.4 Path planning solution

If the recursive solution fails, the agent has an alternative, path-planning based way of resolving the \({ SolveSubTower }\) goal. The plan \({ PathPlanningSolution }\) handles a goal of the form \({ SolveSubTower(d_k) }\), and simply posts a single goal, \({ PlanPath(d_k, p_C) }\).

The goal \({ PlanPath(d_k, p_x) }\) represents the need to plan a path to a state in which \(d_n, d_{n-1}, \ldots , d_k\) are stacked on \(p_x\). When it is posted, a breadth-first search over the space of all possible moves is performed and, given the current state of the engines, the shortest sequences of moves to all tower states that satisfy the goal are found. A separate instance of the plan \({ PathOption }\) is generated per sequence, and the plan instance with the shortest sequence is then selected for execution. The \({ PathOption }\) plan follows the path by posting a series of \({ MakeMove }\) goals.
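A minimal sketch of the search underlying \({ PlanPath }\) (our own illustration; the real implementation must additionally prune moves ruled out by the engine constraints, which we omit):

    from collections import deque

    PINS = ("A", "B", "C")

    def successors(state):
        """All states one legal move away: a move takes the top disc of one
        pin onto another pin that is empty or whose top disc is larger."""
        for i, src in enumerate(state):
            if not src:
                continue
            disc = src[-1]
            for j, dst in enumerate(state):
                if i != j and (not dst or disc < dst[-1]):
                    new = [list(p) for p in state]
                    new[i].pop()
                    new[j].append(disc)
                    yield (disc, PINS[i], PINS[j]), tuple(map(tuple, new))

    def plan_path(start, goal_test):
        """Breadth-first search over the move space: returns a shortest move
        sequence to a state satisfying goal_test, or None if no satisfying
        state is reachable (PlanPath then has no applicable plan)."""
        start = tuple(map(tuple, start))
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for move, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [move]))
        return None

    # Example: a shortest plan stacking all three discs onto pin C.
    path = plan_path([[3, 2, 1], [], []], lambda s: s[2] == (3, 2, 1))
    print(len(path))     # 7 moves for the classic 3-disc tower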

It is of course possible that, while the path is being followed, the engines deteriorate in such a way as to block the path. In this case, the \({ PathOption }\) plan might fail while attempting an unavailable move. Rather than failing the intention as a whole, this failure prompts the goal \({ PlanPath }\) to be re-posted, and an alternative path is sought. However, it is still possible for the path-planning solution to fail: if the engines are sufficiently deteriorated, it may be that no desired state is reachable, in which case no applicable plans will be generated for the goal \({ PlanPath }\). Depending on the intention selection scheme in place, this may cause a goal failure that will propagate to the intention’s top-level goal.


Cite this article

Waters, M., Padgham, L. & Sardina, S. Improving domain-independent intention selection in BDI systems. Auton Agent Multi-Agent Syst 29, 683–717 (2015). https://doi.org/10.1007/s10458-015-9293-5

