Stochastic Reachability with Constraints
Abstract
This chapter is concerned with leveraging the concept of stochastic reachability so that it may capture state and time constraints, dynamics of the target set, randomisation of the time horizon and so on. We formulate state-constrained stochastic reachability for a stochastic hybrid process with no controllers. The idea is that we specialise the stochastic reachability problem by requiring that the desired trajectories satisfy additional conditions (e.g. avoid an obstacle or pass through a particular set). Analytic solutions are provided using the infinitesimal generator of the process. Concretely, the reach set probabilities in the case of state-constrained reachability are solutions of a Dirichlet boundary value problem defined using this generator. The complexity of this problem is also due to the particular expression of this integro-differential operator. To overcome this difficulty, we propose to adopt a viewpoint approach and to solve the problem for different abstraction levels or for different components, which can be defined in the structure of a stochastic hybrid system.
Keywords
Infinitesimal Generator, Discrete Transition, Reachability Problem, Reach Probability, Stochastic Hybrid System
10.1 Overview
This chapter is concerned with leveraging the concept of stochastic reachability so that it may capture state and time constraints, dynamics of the target set, randomisation of the time horizon and so on. These represent hot topics in modern stochastic reachability research. Analytic solutions will be provided only in the case of state-constrained stochastic reachability, while existing research avenues will be described for the other extensions of stochastic reachability.

The ‘reach-avoid’ problem: determine the set of initial conditions for which one can find at least one control strategy to steer the system to a target set while also satisfying some constraints regarding the avoidance of certain known obstacles.

The viability problem: determine the set of initial conditions for which one can find at least one control strategy to keep the system in a given set.

The scheduling problem: determine the set of initial conditions for which one can find at least one control strategy to steer the system to a target set before it visits another state.
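The controlled versions of these problems are deferred, but their uncontrolled analogues are easy to estimate by simulation. The following sketch, for a toy symmetric random walk standing in for the hybrid process (the model, the function name and all parameters are our illustrative choices, not constructs from this chapter), estimates the viability probability: the probability that the trajectory stays inside a given set over a finite horizon.

```python
import random

def viability_probability(x0, low, high, horizon, trials=20000, seed=7):
    """Estimate the probability that a symmetric random walk started at x0
    remains inside the open interval (low, high) for `horizon` steps."""
    rng = random.Random(seed)
    viable = 0
    for _ in range(trials):
        x, alive = x0, True
        for _ in range(horizon):
            x += 1 if rng.random() < 0.5 else -1
            if x <= low or x >= high:  # left the viability set
                alive = False
                break
        viable += alive
    return viable / trials
```

As expected, the estimate decreases as the horizon grows, since longer trajectories have more opportunities to leave the set.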
In this chapter, we formulate state-constrained stochastic reachability for a stochastic hybrid process with no controllers. The controlled version will be presented in a subsequent volume of this title. The formulation of the problem is clean and intuitive, but finding computational solutions is a challenging task. An analytic method based on PDEs will be presented. To investigate trajectories that go towards a target while satisfying some constraints, the reader more interested in simulations will need characterisations based on other probabilistic concepts, like entropy and probability current. For the discrete case, intuitive representations can be found using the connections between Markov chains and electrical networks.
Objectives
 1.
To define formally the concept of state-constrained stochastic reachability.
 2.
To provide analytical characterisations of the above concept.
 3.
To explain the deployment of state-constrained stochastic reachability at the different scales of an SHS model.
 4.
To give further generalisations of the above concept.
10.2 Mathematical Definitions
In this section, we extend the concept of state-constrained reachability, defined in the literature (see [14] and references therein) only for discrete/continuous-time Markov chains (with discrete state space), to continuous-time/space Markov processes. Further, we study this concept for stochastic hybrid processes.
 Obstacle avoidance reachability (see Fig. 10.1). In this interpretation B is a safe set, while A is not. The goal is to compute the probability, denoted by$$\begin{aligned} p_{\lnot A}^{B}(x), \end{aligned}$$of all trajectories that start from a given initial state x and hit the set B without hitting the state set A.
 Waypoint reachability (see Fig. 10.2). In this interpretation we are interested in computing the probability, denoted by$$\begin{aligned} p_{A}^{B}(x), \end{aligned}$$of all trajectories that hit B only after hitting A.
Consequently, state-constrained reachability is related to some classical topics treated in the literature of Markov processes, like the first passage problem [109], excursion theory [100] and estimation of the equilibrium potential of the capacitor of the two sets [37]. These references provide theoretical characterisations of probabilities (10.1) for different classes of Markov processes. Here, the scope is not to survey all these characterisations but to identify the appropriate analytical solutions of this problem.
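Before turning to analytic characterisations, the obstacle-avoidance probability \(p_{\lnot A}^{B}(x)\) can be illustrated by Monte Carlo simulation on a toy example (a symmetric random walk on \(\{0,\dots,10\}\) with A = {0} and B = {10}; the function name and parameters are our illustrative choices). For this walk, the gambler's-ruin formula gives the exact value x/10, which provides a sanity check.

```python
import random

def reach_avoid_probability(x0, trials=20000, seed=1):
    """Estimate p_{notA}^B(x0) for a symmetric random walk on {0,...,10}:
    the probability of hitting the target B = {10} before the
    obstacle A = {0}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = x0
        while 0 < x < 10:  # run until the walk hits A or B
            x += 1 if rng.random() < 0.5 else -1
        hits += (x == 10)
    return hits / trials
```

Starting from x0 = 3, the estimate should be close to the exact value 0.3.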
10.3 Mathematical Characterisations
The main scope of this section is to prove that state-constrained reachability probabilities can be characterised as solutions of certain boundary value problems expressed in terms of the infinitesimal generator of the given stochastic hybrid process.
We will use the concepts of excessive function and kernel operator V that can be introduced with respect to a Markov process M (see Chap. 2).
The following assumption is essential for the results of this section.
Assumption 10.1
Suppose that M is a transient Markov process, i.e., there exists a strictly positive Borel-measurable function q such that Vq is a bounded function.
Using [37], we have the following characterisation.
Proposition 10.1
 (i)
\(0\leq p_{A}^{B}\leq 1\) a.s. on X,
 (ii)
\(p_{A}^{B}=0\) a.s. on B and \(p_{A}^{B}=1\) on A,
 (iii)
\(p_{A}^{B}\) is the potential of a signed measure ν such that the support of ν ^{+} is contained in A and the support of ν ^{−} is contained in B.
 the composition of the hitting operators corresponding to the target sets A and B$$\begin{aligned} V^{A\rightarrow B}:=P_{A}\circ P_{B}, \end{aligned}$$provided that T _{ A }, T _{ B }<ζ (ζ is the lifetime of the process);
 the probability of hitting A again after n excursions between A and B$$\begin{aligned} p_{n}:=\bigl(V^{A\rightarrow B}\bigr)^{n}\varphi_{A}, \end{aligned}$$where φ _{ A } is given by ( 7.3);
 the probability of hitting A again after ‘infinitely many’ excursions between A and B$$\begin{aligned} \varGamma :=\sum_{n=0}^{\infty }p_{n}. \end{aligned}$$
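These quantities can be computed explicitly for a small transient chain. In the sketch below (our simplification, not a construction from this chapter), A and B are singletons {a} and {b} on a finite substochastic chain, so the hitting operator reduces to \(P_{A}f(x)=\varphi_{A}(x)f(a)\) and the series for Γ becomes geometric with ratio \(r=\varphi_{B}(a)\varphi_{A}(b)\); transience of the chain (r < 1) is assumed.

```python
def hitting_prob(P, a, iters=200):
    """phi_A(x) = P_x(T_A < lifetime) for a substochastic transition
    matrix P and singleton A = {a}: the fixed point of phi = 1 on A,
    phi(x) = sum_y P[x][y]*phi(y) off A (monotone iteration from 0)."""
    n = len(P)
    phi = [0.0] * n
    for _ in range(iters):
        phi = [1.0 if x == a else sum(P[x][y] * phi[y] for y in range(n))
               for x in range(n)]
    return phi

def excursion_series(P, a, b, terms=200):
    """Gamma(x) = sum_{n>=0} p_n(x) with p_n = (V^{A->B})^n phi_A.
    For singleton A, B the operator acts as
    V^{A->B} f(x) = phi_A(x) * phi_B(a) * f(b), so the series is
    geometric with ratio r = phi_B(a) * phi_A(b) < 1 (transience)."""
    phi_a = hitting_prob(P, a)
    phi_b = hitting_prob(P, b)
    r = phi_b[a] * phi_a[b]
    partial = (1 - r ** terms) / (1 - r)  # finite geometric sum
    return [pa * partial for pa in phi_a]
```

For the two-state chain that moves to the other state with probability 1/2 and dies otherwise, φ_A(b) = φ_B(a) = 1/2, so Γ(a) = 1/(1 − 1/4) = 4/3 and Γ(b) = 2/3.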
Proposition 10.2
Proof
Theorem 10.3
This is the main theorem about the characterisation of state-constrained reachability. The theorem can be proved for Borel right processes that are stochastic hybrid processes. Stochastic hybrid processes have a continuous dynamics given by some diffusion processes and a discrete dynamics described by a Markov chain. Therefore, the proof is a consequence of the following two lemmas, which are instantiations of the theorem for Brownian motion and Markov chains. We have not found the proofs in any monograph on stochastic processes that treats first passage problems and excursion theory for Markov processes (see for example [109]); therefore we sketch these proofs in the following.
Lemma 10.4
Lemma 10.5
Proof
Lemma 10.6
Proof
Theorem 10.3 characterises the probabilities of state-constrained reachability as the solutions of a Dirichlet boundary value problem (DBVP) associated with the generator of the underlying stochastic hybrid process. This generator is a second-order elliptic integro-differential operator and it is known that, for this type of nonlocal operator, the value of the solutions of the DBVP has to be prescribed not only on the boundary of the domain but also on its whole complement [95]. Under standard hypotheses, the existence and uniqueness of the solutions of such equations can be proved. The solutions are called potential functions for the underlying Markov process and they play the same role as harmonic functions for the Laplace operator.
For the verification problems defined in the context of stochastic hybrid systems, the DBVP defined in Theorem 10.3 has to be solved only locally in the appropriate modes. In this way, the rather difficult boundary condition (which results from the definition of the infinitesimal generator) can be avoided. We then expect that most numerical methods available for the Dirichlet boundary value problem corresponding to second-order elliptic partial differential operators can be extended in a satisfactory manner to solve our problem.

at the ‘low level’, i.e., continuous dynamics in the operating modes are controlled diffusions,

or, at the ‘decision level’, i.e., the jumping structure is governed by a decision process (usually in the form of a Markov decision process).
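In the simplest discrete viewpoint, the Dirichlet boundary value problem of Theorem 10.3 collapses to a linear system: the reach probability is harmonic for the chain's generator in the interior, with boundary values 0 on the obstacle and 1 on the target. A minimal sketch (our toy birth-death chain on {0, ..., n}, solved by Gauss-Seidel relaxation, one of the standard numerical methods alluded to above):

```python
def dirichlet_reach_probability(n=10, p=0.5, tol=1e-12, max_sweeps=100000):
    """Discrete Dirichlet problem for a birth-death chain on {0,...,n}:
    find h with h(0) = 0 (obstacle A), h(n) = 1 (target B) and
    h(x) = p*h(x+1) + (1-p)*h(x-1) in the interior, i.e. h is harmonic
    for the chain's generator.  Solved by Gauss-Seidel relaxation."""
    h = [0.0] * (n + 1)
    h[n] = 1.0
    for _ in range(max_sweeps):
        delta = 0.0
        for x in range(1, n):
            new = p * h[x + 1] + (1 - p) * h[x - 1]
            delta = max(delta, abs(new - h[x]))
            h[x] = new
        if delta < tol:  # converged
            break
    return h
```

For the symmetric case p = 1/2 the solution is the linear potential h(x) = x/n, matching the gambler's-ruin formula.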
10.4 State-Constrained Reachability in a Development Setting
So far, state-constrained reachability has been defined and investigated in the abstract setting provided by the SHS models. We have shown that the reach probabilities represent the solution of an elliptic integro-differential equation. This problem can be efficiently solved in some cases, for example when the equation can be reduced to a purely differential elliptic one. However, for complex systems, the integro-differential equation can be difficult to solve. In this case, the abstract model of SHS should be replaced with more manageable models. This can be done in many ways, such as through approximations, functional abstractions, model reductions and so on. Based on existing research, we propose a multilayer approach for describing a complex hybrid system with stochastic behaviour. The same system is described using a set of different models, each one constructed at a different level of abstraction. The models can be related by abstraction or refinement maps. This approach makes it possible to solve a specific problem for the given system at the right level of abstraction.
In this setting, we search for more manageable solutions for state-constrained reachability analysis.
10.4.1 Multilayer Stochastic Hybrid Systems
In this subsection, we introduce a multilayer model for stochastic hybrid systems. This is inspired by the viewpoints model from software engineering. There, a system is modularly developed from different perspectives. Each perspective provides a model of the system, called a viewpoint. The viewpoints then need to be consistently unified to provide the overall system description. This methodology corresponds to a horizontal development philosophy. In this section, we propose a vertical (or hierarchical) viewpoint approach. In this approach, the system is described by viewpoints constructed on top of each other, each one providing a partial model at a different abstraction level.
The second map Ψ can be defined in various ways, adding flexibility to the viewpoint modelling. For example, Ψ can be generically defined by replacing a certain discrete transition in the dynamics of the viewpoint j with a set of trajectories of the hybrid system described by the viewpoint (j−1). In particular, a single discrete transition can be mapped into the trajectories of a continuous dynamical system. In this way, a viewpoint described by a discrete transition system can be related to another viewpoint described by a hybrid system.
To illustrate the viewpoint approach let us consider a flying aircraft. At the lowest level of detail, its dynamics can be accurately described by a switching diffusion process. In this model of SHS, the hybrid trajectories are continuous and are piecewise diffusion paths. At the most abstract level, the flight can be modelled as a probabilistic timed automaton that is a sort of discrete transition system. In this viewpoint, the aircraft lies in a state for a certain time and then with a given rate it makes a discrete transition. An intermediate viewpoint, between the continuous and discrete, is modelled by a stochastic hybrid system. In the intermediate viewpoint, a discrete transition from the discrete viewpoint is refined into a continuous mode and certain continuous paths from the continuous viewpoint are abstracted away into discrete transitions. Of course, in the stochastic setting there can be many subtle cases, like rates of discrete transitions that depend on the diffusion evolution in a mode.
The utility of a multilayer model consists of the possibility of solving categories of problems at different levels/viewpoints. For example, the stability problems are more efficiently studied in a continuous viewpoint (corresponding to the lowest abstraction level). A safety verification problem can be formally tackled in a discrete viewpoint (corresponding to the highest abstraction level). Many control problems can be suitably studied in the hybrid viewpoint (corresponding to an intermediate level of abstraction).
State-constrained reachability has been defined in a viewpoint corresponding to an intermediate discrete/continuous level of abstraction. Since the previous sections have shown that the approach in this viewpoint can lead to problems with integro-differential operators that are difficult to solve, it is worthwhile to study the problem at other levels of abstraction/viewpoints. In a discrete viewpoint, one can hope to use probabilistic model checking techniques. In a continuous viewpoint, the well-developed mathematical apparatus of diffusion processes becomes available, with specific benefits.
In order to make the state-constrained reachability approach practical, a further refinement of the mathematical model is necessary. This refinement takes into account the Euclidean space in which processes evolve. We call such processes spatial SHS. An SHS is called spatial if some of its parameters together form a subspace of the Euclidean spaces \(\mathbb{R}\), \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\).
A multilayer model can be fruitfully combined with spatial models for designing or improving control. Suppose that a system that is an n-dimensional SHS is a spatial process at a higher level of abstraction obtained by ignoring the non-spatial parameters. With this, one can obtain a viewpoint in which the system is modelled as a spatial Wiener process. The state-constrained stochastic reachability problem becomes more tractable in this viewpoint. If the reach probability is high and the causes of this can be detected, then in the original viewpoint a control strategy can be considered that minimises the reach probability. For example, in the case of an air traffic control system, the spatial viewpoint can indicate that the collision probability becomes higher in areas of dense traffic with no coordination. A control strategy will then design a pathway for the aircraft that avoids the high traffic density regions.
Continuous Viewpoint
The following two examples are inspired from [109].
Example 10.7
Example 10.8
Discrete Viewpoint
Example 10.9
10.4.2 Electrostatic Analogy
It is known that the theory of Markov processes is intimately connected with mathematical physics (see [33]). The solutions of the DBVP from Theorem 10.3 can be characterised as certain potential functions defined on the state space of the underlying Markov process. These describe the ‘probability distributions’ of a charge that is distributed on the state space, where the set A (the obstacle) produces a repulsive force and the set B (the target) yields an attractive force. In potential theory, the physical interpretation of the state-constrained reachability probability considered in this chapter is related to the condenser problem (see [74]). This is described as follows: suppose we are given two disjoint compact conductors A, B of positive capacity [51] in the Euclidean space \(\mathbb{R}^{3}\). A positive electric unit charge placed on A and a negative unit charge on B, both allowed to distribute freely on the respective sets, will reach a state of equilibrium, which is characterised on the one hand by minimal energy and on the other hand by constant potential on A and on B (possibly excepting sets of zero capacity).
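A discrete analogue of the condenser problem makes the analogy concrete: on a resistor network with unit conductances, the equilibrium potential is harmonic off the two plates, and its Dirichlet energy plays the role of the capacity. The sketch below (our toy grid network; names and the plate geometry are illustrative assumptions) computes the potential by Jacobi iteration and its energy.

```python
def condenser_potential(n=5, tol=1e-12):
    """Discrete condenser on an n x n grid with unit conductances: the
    centre node is the inner plate A (potential 1), the boundary is the
    outer plate B (potential 0).  The equilibrium potential is harmonic
    (the average of its neighbours) off A and B; Jacobi iteration
    preserves the symmetry of the grid at every step."""
    c = n // 2
    v = {(i, j): 0.0 for i in range(n) for j in range(n)}
    v[(c, c)] = 1.0
    interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)
                if (i, j) != (c, c)]
    while True:
        new = {(i, j): (v[(i - 1, j)] + v[(i + 1, j)]
                        + v[(i, j - 1)] + v[(i, j + 1)]) / 4.0
               for (i, j) in interior}
        delta = max(abs(new[k] - v[k]) for k in interior)
        v.update(new)
        if delta < tol:
            return v

def dirichlet_energy(v, n=5):
    """Energy sum over edges of (v(x)-v(y))^2; with unit conductances
    this is the capacity of the condenser in the electrostatic analogy."""
    e = 0.0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                e += (v[(i, j)] - v[(i + 1, j)]) ** 2
            if j + 1 < n:
                e += (v[(i, j)] - v[(i, j + 1)]) ** 2
    return e
```

Probabilistically, the computed potential at a node is the probability that the walk started there hits the inner plate before the boundary, which is exactly the state-constrained reach probability of this chapter.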
10.5 Further Generalisations: Randomised Targets
In this section, we investigate how to define the notion of state-constrained reachability when the ‘final goal’ is replaced by a ‘random event’ that takes place at a random time. The objective is to compute the probability of those trajectories that visit a set before ‘something’ random happens.
Leveraging State-Constrained Reachability

level sets for some given functions,

sets of states that validate some logical formulae,

metastable sets for the given process.

the boundary of one mode of the stochastic hybrid process,

a cemetery set where the process is trapped,

a set that is reached according to a statedependent rate and so on.

a discrete transition time from one mode to another of the stochastic hybrid process,

a time of the apparition of a certain event,

a time defined by the until operator in a suitable continuous stochastic logic associated to our hybrid Markov process.
Randomised Stopping Times
 (i)
K(ω,⋅) is a probability measure for each ω∈Ω;
 (ii)
K(⋅,[0,t]) is \(\mathcal{F}_{t}\)-measurable for each t.
Markovian Randomised Stopping Times
 Markov killing times: T is a Markov killing time for M if under \(\mathbb{P}_{x}\) the killed process (x _{ t }, 0≤t≤T) is Markovian with the sub-Markovian semigroup (Γ _{ t })_{ t≥0}:$$\begin{aligned} \varGamma_{t}f(x):=\mathbb{P}_{x}\bigl[f(x_{t})1_{(t<T)} \bigr]. \end{aligned}$$In addition, we assume that Γ _{ t } f is ℬ-measurable for all t>0 and all positive ℬ-measurable f.
 Terminal times: An \((\mathcal{F}_{t})\)-stopping time \(T:\varOmega \rightarrow \mathbb{R}_{+}\) is called a terminal time if$$\begin{aligned} T=t+T\circ \theta_{t} \end{aligned}$$(10.8)identically on [t<T].

the random time T _{ r } with the stopping measure K _{ t }=e ^{−rt } P _{ t };

∞=lim_{ r→0} T _{ r };

the first entrance/last exit time of a suitable subset B of the state space X;

the random time obtained by killing the process at statedependent rate k(x _{ t }).
A finite fixed time T is not a Markov killing time unless the process is set up as a space–time process, in which case T becomes a hitting time. As the example of last exit times shows, killing times need not be stopping times of the process, in contrast with terminal times, which are necessarily stopping times.
The most common examples of terminal times are provided by the hitting times of measurable subsets of state space X. The jumping times in the definition of a stochastic hybrid process are terminal times.
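The effect of a randomised horizon can again be illustrated by simulation. The sketch below (our toy discrete analogue of the stopping measure \(K_{t}=e^{-rt}P_{t}\): a random walk killed at each step with probability 1 − e^{−r}; the function name and parameters are illustrative assumptions) estimates the probability of reaching the target before the random killing time.

```python
import math
import random

def reach_before_killing(x0, rate, trials=20000, seed=3):
    """Estimate the probability that a symmetric random walk on
    {0,...,10}, killed at each step with probability 1 - exp(-rate)
    (a discrete analogue of the stopping measure e^{-rt} P_t),
    reaches the target {10} before being killed or absorbed at the
    obstacle {0}."""
    rng = random.Random(seed)
    survive = math.exp(-rate)
    hits = 0
    for _ in range(trials):
        x = x0
        while 0 < x < 10:
            if rng.random() > survive:  # killed before the next step
                break
            x += 1 if rng.random() < 0.5 else -1
        hits += (x == 10)
    return hits / trials
```

As the killing rate tends to zero the estimate recovers the unconstrained reach probability, while a large rate makes long trajectories to the target very unlikely.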
Multiplicative Functionals
A common methodology to obtain randomised stopping times is by using multiplicative functionals. These functionals have a long history in the theory of Markov processes and they have been employed mostly to describe transformations of the trajectories for these processes.
The seminal work about properties of trajectories of a Markov process in connection with multiplicative functionals belongs to E.B. Dynkin [77]. Under some regularity hypotheses, Dynkin proved that transformed processes can be defined such that their trajectories are ‘restrictions’ of the trajectories of the initial process. In [154], Kunita and Watanabe showed that the transformations of a Markov process governed by multiplicative functionals, whose expectations are dominated by 1, preserve some regularity properties of the initial process. Such properties are: (strong) Markovianity, right continuity of the trajectories and quasi-left continuity of the stopping time sequences.

the target goal, which may have its own dynamics (as a set or a point mass), or may represent the boundary of a safe set,

the time horizon that could be a randomised stopping time,
10.6 Some Remarks
In this chapter, we extended the so-called constrained reachability problem from the probabilistic discrete case to stochastic hybrid systems. We then defined this problem mathematically and obtained the reach probabilities as solutions of a boundary value problem. These characterisations are useful in stochastic control and in probabilistic path planning. In this chapter, the stochastic reachability problem for stochastic hybrid systems has been specialised by introducing constraints relative to the state space. We proved that state-constrained stochastic reachability is solvable. Moreover, we described how the concept could be leveraged so that it captures more constraints regarding time and space. Numerical solutions for this problem depend strongly on the underlying model of stochastic hybrid systems. It is clear that the simplest way to deal with this problem is to work either at the bottom level (using only diffusions) or at the top level (using only Markov chains).
References
 11. Barles, G., Chasseigne, E., Imbert, C.: On the Dirichlet problem for second-order elliptic integro-differential equations. Preprint (2007)
 14. Baier, C., Haverkort, B.R., Hermanns, H., Katoen, J.P.: Reachability in continuous-time Markov reward decision processes. In: Logic and Automata: History and Perspectives, pp. 53–71 (2007)
 15. Baxter, J.R., Chacon, R.V.: Compactness of stopping times. Z. Wahrscheinlichkeitstheor. Verw. Geb. 40(3), 169–181 (1977)
 33. Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. Academic Press, New York (1968)
 37. Bovier, A.: Metastability. Lecture Notes in Mathematics, vol. 1970. Springer, Berlin (2009)
 51. Choquet, G.: Theory of capacities. Ann. Inst. Fourier 5, 131–291 (1953)
 74. Doob, J.L.: Classical Potential Theory and Its Probabilistic Counterpart. Springer, Berlin (1984)
 77. Dynkin, E.B.: Markov Processes I. Springer, Berlin (1965)
 95. Garroni, M.G., Menaldi, J.L.: Second Order Elliptic Integro-Differential Problems. Chapman & Hall/CRC Press, London/Boca Raton (2002)
 100. Getoor, R.K.: Excursions of a Markov process. Ann. Probab. 7(2), 244–266 (1979)
 109. Grimmett, G., Stirzaker, D.: Probability and Random Processes. Oxford University Press, London (1982)
 154. Kunita, H., Watanabe, T.: Notes on transformations of Markov processes connected with multiplicative functionals. Mem. Fac. Sci., Kyushu Univ., Ser. A, Math. 17(2), 181–191 (1963)
 208. Sharpe, M.: General Theory of Markov Processes. Academic Press, San Diego (1988)
 222. Taira, K.: Boundary value problems for elliptic integro-differential operators. Math. Z. 222, 305–327 (1996)