Autonomous Agent Behaviour Modelled in PRISM – A Case Study
Abstract
Formal verification of agents representing robot behaviour is a growing area, driven by the demand that autonomous systems be proven safe. In this paper we present an abstract definition of autonomy which can be used to model autonomous scenarios, and propose the use of small-scale simulation models representing abstract actions to infer quantitative data. To demonstrate the applicability of the approach we build and verify a model of an unmanned aerial vehicle (UAV) in an exemplary autonomous scenario.
Keywords
Unmanned Aerial Vehicle · Minimum Probability · Abstract Action · Hybrid Automaton · Emergency Landing

1 Introduction
Autonomous systems have the ability to decide at runtime what to do and how to do it. A critical question is how this decision making process is implemented.
Increasingly, autonomous systems are being deployed within the public domain (e.g. driverless cars, delivery drones). Naturally, there is concern that these systems are reliable, efficient and, most of all, safe. Although testing is a necessary part of this process, simulation and formal verification are key tools, especially at the early stages of design, where experimental testing is both infeasible and dangerous. Simulation allows us to view the continuous dynamics and monitor the behaviour of a system, while model checking allows us to formally verify properties of a finite representation. Whereas the simulation model is close to an implementation, simulation runs are necessarily incomplete; verification models, on the other hand, require us to abstract more coarsely.
The decisions made by an autonomous agent depend on the current state of the environment, specifically in terms of data perceived by the agent from its sensors. If model checking is to be used for the verification of autonomous systems we must reflect the uncertainty associated with the state of the environment by using probabilistic model checking.
We propose a framework for analysing autonomous systems, specifically to investigate decision-making, using probabilistic model checking of an abstract model where quantitative data for abstract actions is derived from small-scale simulation models. We illustrate our approach for an example system composed of a UAV searching for and collecting objects in an arena. The simulation models for abstract actions are generated using the object-oriented framework Simulink and the abstract models are specified and verified using the probabilistic model checker PRISM. In our example, autonomous decision making involves making a weighted choice between a set of possible actions, and is loosely based on the Belief-Desire-Intention architecture [7].
Previous work in which model checking is used to verify autonomy includes [3], in which the decision-making process is verified in isolation; our aim, in contrast, is to integrate this process with the autonomous agent as a whole. Other research includes an investigation of the cooperative behaviour of robots, where each robot is represented by a hybrid automaton [1], and the verification of a consensus algorithm using experimental verification and an external observer [2].
2 Autonomy
In order to formally define autonomous behaviour, we introduce finite state machines which abstract the autonomous actions independently of the agent type.
Before we give the formal definitions we require the following notation. For a finite set of variables V, a valuation of V is a function s mapping each variable in V to a value in its finite domain. Let \( val (V)\) be the set of valuations of V. For any \(s \in val (V)\), \(v \in V\) and value x in the domain of v, let \(s[v{:=}x]\) and \(s[v{\pm }x]\) be the valuations where for any \(v' \in V\) we have \(s[v{:=}x](v')=x\) and \(s[v{\pm }x](v')=s(v'){\pm }x\) if \(v'{=}v\) and \(s[v{:=}x](v')=s[v{\pm }x](v')=s(v')\) otherwise. For a finite set X, a probability distribution over X is a function \(\mu : X \rightarrow [0, 1]\) such that \(\sum _{x \in X} \mu (x) = 1\). Let \( Dist (X)\) be the set of distributions over X.
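The notation above can be made concrete with a small sketch. The following Python fragment (our own illustration; the function names are not from the paper) encodes valuations as dictionaries and implements the updates \(s[v{:=}x]\) and \(s[v{\pm }x]\), together with a check that a mapping is a probability distribution.

```python
def update(s, v, x):
    """Return the valuation s[v:=x]: identical to s except that v maps to x."""
    t = dict(s)
    t[v] = x
    return t

def shift(s, v, dx):
    """Return the valuation s[v+dx]: identical to s except v's value is shifted by dx."""
    t = dict(s)
    t[v] = s[v] + dx
    return t

def is_distribution(mu, tol=1e-9):
    """Check that mu is a probability distribution: values in [0,1] summing to 1."""
    return all(0.0 <= p <= 1.0 for p in mu.values()) and abs(sum(mu.values()) - 1.0) < tol

# A valuation of the variables {b, t}:
s = {"b": 100, "t": 0}
assert update(s, "t", 5) == {"b": 100, "t": 5}
assert shift(s, "b", -10) == {"b": 90, "t": 0}
assert is_distribution({"heads": 0.5, "tails": 0.5})
```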
Definition 1
A probabilistic finite-state machine is a tuple \(\mathcal {M} {=} (V,I,A,T)\) where: V is a finite set of variables; \(I\subseteq val (V)\) is a set of initial states; A is a finite set of actions; and \(T: val (V) {\times } A \rightarrow Dist ( val (V))\) is a (partial) transition function.
The set of states of \(\mathcal {M} {=} (V,I,A,T)\), denoted S, is the set of valuations \( val (V)\) of V. Let A(s) denote the actions available from state s, i.e. the actions \(a \in A\) for which T(s, a) is defined. In state s an action is chosen nondeterministically from the available actions A(s) and, if action a is chosen, the transition to the next state is made according to the probability distribution T(s, a). A probabilistic finite-state machine describes a system without autonomy; we introduce autonomy through a weight function that adds decision making to the finite-state machine.
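The semantics of Definition 1 can be sketched as follows (a minimal Python encoding of our own, not the paper's PRISM model): states are valuations, here frozen into tuples of (variable, value) pairs so they can index the transition function, and firing a chosen action samples a successor from the corresponding distribution.

```python
import random

class PFSM:
    def __init__(self, transitions):
        # transitions: {(state, action): {successor_state: probability}}
        self.transitions = transitions

    def available(self, state):
        """A(s): the actions a for which T(s, a) is defined."""
        return sorted(a for (s, a) in self.transitions if s == state)

    def step(self, state, action, rng=random):
        """Fire the chosen action a by sampling a successor from T(s, a)."""
        dist = self.transitions[(state, action)]
        succs = list(dist)
        return rng.choices(succs, weights=[dist[t] for t in succs])[0]

# Toy example: from s0, action "go" reaches s1 with probability 0.9 and
# stays in s0 otherwise, while "wait" loops with probability 1.
s0, s1 = (("done", 0),), (("done", 1),)
m = PFSM({(s0, "go"): {s1: 0.9, s0: 0.1}, (s0, "wait"): {s0: 1.0}})
assert m.available(s0) == ["go", "wait"]
assert m.step(s0, "wait") == s0
```

The choice between "go" and "wait" is left unresolved here; that is exactly the nondeterminism the weight function of Definition 2 resolves.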
Definition 2
An autonomous probabilistic finite-state machine is a tuple \(\mathcal {A} = (V,I,A,T,w)\) where (V, I, A, T) is a probabilistic finite-state machine and w is a weight function \(w: val (V) {\times } A \rightarrow [0,1]\) such that for any \(s \in val (V)\) and \(a \ne b \in A\) we have \(w(s,a) \ne w(s,b)\) and \(w(s,a){>}0\) implies \(a \in A(s)\).
3 UAV Example
In this case study we consider a specific example (a simple search and retrieve scenario, with a UAV in a finite-sized arena) to demonstrate our approach. The autonomous probabilistic finite-state machine \(\mathcal {A}\) modelling this scenario has the following variables:
\( obj \): the number of objects which have not been found;
\( pos {=} ( pos _x, pos _y)\): the position of the UAV in the arena;
\( ret {=} ( ret _x, ret _y)\): the return coordinates when search is interrupted;
m: the current mode of the UAV;
t: the mission time;
b: the battery charge level.
Each state \(s\in val(V)\) of \(\mathcal {A}\) is a valuation of these variables. The transition and weight functions T and w are based on Fig. 1. We focus on the target approach and search modes of the UAV.
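The variables above can be collected into a single state record. The layout below is our own illustration (the field names follow the paper; the concrete types and the string encoding of the mode do not appear in it):

```python
from dataclasses import dataclass

@dataclass
class UAVState:
    obj: int    # number of objects which have not been found
    pos: tuple  # (pos_x, pos_y): position of the UAV in the arena
    ret: tuple  # (ret_x, ret_y): return coordinates when search is interrupted
    m: str      # current mode of the UAV, e.g. "search" or "target_approach"
    t: int      # mission time
    b: int      # battery charge level

# An example initial state: three objects to find, full battery.
s = UAVState(obj=3, pos=(0, 0), ret=(0, 0), m="search", t=0, b=100)
assert s.obj == 3 and s.m == "search"
```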
4 Results
We have modelled our scenario in the probabilistic model checker PRISM [5], building small-scale simulation models of individual abstract actions to generate probabilistic and timing values. To encode the timing values in PRISM as integer variables we take the floor and ceiling, introducing nondeterminism into the model and yielding upper and lower bounds for properties [4]. The experiments were performed on a computer with 16GB RAM and a 2.3 GHz Intel Core i5 processor.
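The timing encoding can be sketched as follows (the helper function is our own illustration): a real-valued duration from a simulation run is stored as an integer, so both its floor and its ceiling are kept, and a nondeterministic choice between them induces lower and upper bounds on time-dependent properties.

```python
import math

def integer_bounds(duration):
    """Return the (floor, ceil) pair used as the two nondeterministic timings."""
    return math.floor(duration), math.ceil(duration)

lo, hi = integer_bounds(7.3)
assert (lo, hi) == (7, 8)

# Accumulating the floors along a run under-approximates the total time,
# while accumulating the ceilings over-approximates it:
durations = [7.3, 2.9, 4.0]
lows = sum(math.floor(d) for d in durations)
highs = sum(math.ceil(d) for d in durations)
assert lows <= sum(durations) <= highs
```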
The main properties of interest concern the UAV successfully completing the mission (finding and depositing all objects within the time limit) and failing the mission (either due to an emergency landing, missing some objects or running out of time). We have also considered other properties, including how often the UAV drops an object and how often it recharges during a mission; more details can be found in the repository [6].
For the first scenario, the maximum and minimum probabilities of the UAV completing a mission are 0.7610 and 0.6787 respectively. The maximum and minimum probabilities of running out of time are negligible; those of searching the arena but missing some objects are 0.2617 and 0.1808, and those of performing an emergency landing are 0.0628 and 0.0552. Figure 2 shows the maximum and minimum probability of a successful mission within a time bound, as well as the results obtained when the nondeterminism is replaced by a uniform random choice. The probability increases after a threshold time, as the UAV has to search a proportion of the arena before finding all objects.
5 Conclusions
We have proposed using small-scale simulation models to inform probabilistic models used for verification. The simulation models can be used to provide quantitative data for abstract actions. The probabilistic models can be used for fast, property-specific verification that is not possible using simulations alone.
Our approach is highly adaptable; once the initial small-scale simulation and probabilistic models have been set up, different decision algorithms can be easily substituted and analysed. Our example illustrates the use of a weight function for decision making. In a more extensive scenario the weight function would be more complex (e.g. involving current values associated with all sensors and guiding systems). Our use of nondeterminism when approximating quantitative data obtained from the small-scale simulation models allows us to provide a range of uncertainty for our results. We aim to formally prove a link between the simulation and the abstract model, allowing us to infer results from the abstract model for the actual system. To allow the analysis of more complex scenarios we plan to incorporate abstraction techniques.
Acknowledgments
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/N508792/1].
References
 1. Chaimowicz, L., Campos, M.F.M., Kumar, V.: Hybrid systems modeling of cooperative robots. In: Proceedings of the International Conference on Robotics and Automation (ICRA 2003), pp. 4086–4091. IEEE Press, New York (2003)
 2. Cook, J., Hu, G.: Experimental verification and algorithm of a multi-robot cooperative control method. In: Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2010), pp. 109–114. IEEE Press, New York (2010)
 3. Dennis, L., Fisher, M., Lincoln, N., Lisitsa, A., Veres, S.: Practical verification of decision-making in agent-based autonomous systems. Autom. Softw. Eng., pp. 1–55 (2014). http://link.springer.com/article/10.1007/s10515-014-0168-9
 4. Kattenbelt, M., Kwiatkowska, M., Norman, G., Parker, D.: A game-based abstraction-refinement framework for Markov decision processes. Formal Methods Syst. Des. 36(3), 246–280 (2010)
 5. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011)
 6. PRISM model repository (2016). http://dx.doi.org/10.5525/gla.researchdata.274
 7. Veres, S., Molnar, L., Lincoln, N., Morice, C.: Autonomous vehicle control systems – a review of decision making. Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng. 225(2), 155–195 (2011)
Copyright information
Open Access This chapter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license and any changes made are indicated.
The images or other third party material in this chapter are included in the work’s Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work’s Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.