
Stochastic Reachability with Constraints

  • Luminita Manuela Bujorianu
Chapter
Part of the Communications and Control Engineering book series (CCE)

Abstract

This chapter is concerned with leveraging the concept of stochastic reachability so that it can capture state and time constraints, dynamics of the target set, randomisation of the time horizon and so on. We formulate state-constrained stochastic reachability for a stochastic hybrid process with no controllers. The idea is that we specialise the stochastic reachability problem by requiring that the desired trajectories satisfy additional conditions (e.g. to avoid an obstacle or to pass through a particular set). Analytic solutions are provided using the infinitesimal generator of the process. Concretely, the reach set probabilities in the case of state-constrained reachability are solutions of a Dirichlet boundary value problem defined using this generator. The complexity of this problem is also due to the particular expression of this integro-differential operator. To overcome this difficulty, we propose a viewpoint approach: the problem is solved at different abstraction levels, or for different components, which can be defined in the structure of a stochastic hybrid system.

Keywords

Infinitesimal generator, Discrete transition, Reachability problem, Reach probability, Stochastic hybrid system

10.1 Overview

This chapter is concerned with leveraging the concept of stochastic reachability such that it may capture state and time constraints, dynamics of the target set, randomisation of the time horizon and so on. These represent hot topics in modern stochastic reachability research. Analytic solutions will be provided only in the case of state-constrained stochastic reachability, while existing research avenues will be described for the other extensions of stochastic reachability.

State-constrained stochastic reachability represents the stochastic version of some well-developed problems for deterministic systems, including:
  • The ‘reach-avoid’ problem: determine the set of initial conditions for which one can find at least one control strategy to steer the system to a target set while also satisfying some constraints regarding the avoidance of certain known obstacles.

  • The viability problem: determine the set of initial conditions for which one can find at least one control strategy to keep the system in a given set.

  • The scheduling problem: determine the set of initial conditions for which one can find at least one control strategy to steer the system to a target set before it visits another state.

In this chapter, we formulate state-constrained stochastic reachability for a stochastic hybrid process with no controllers. The controlled version will be presented in a subsequent volume of this title. The formulation of the problem is intuitive, but finding computational solutions is a challenging task. An analytic method based on PDEs will be presented. Readers more interested in simulations of trajectories that move towards a target while satisfying some constraints will need characterisations based on other probabilistic concepts, like entropy and probability current. For the discrete case, intuitive representations can be found using the connections between Markov chains and electrical networks.

Objectives

The objectives of this chapter can be summarised as follows:
  1. To define formally the concept of state-constrained stochastic reachability.

  2. To provide analytical characterisations of the above concept.

  3. To explain the deployment of state-constrained stochastic reachability at the different scales of an SHS model.

  4. To give further generalisations of the above concept.

10.2 Mathematical Definitions

In this section, we extend the concept of state-constrained reachability, defined in the literature (see [14] and references therein) only for discrete/continuous-time Markov chains (with discrete state space), to continuous time/space Markov processes. Further, we study this concept for stochastic hybrid processes.

Let us consider a stochastic hybrid process
$$\begin{aligned} \mathbf{M}=(x_{t},\mathbb{P}_{x}) \end{aligned}$$
with state space (X,ℬ(X)). State-constrained reachability analysis denotes a reachability problem with additional conditions (constraints) on the system trajectories. Let us consider A,B, two Borel-measurable sets of the state space X with disjoint closures, i.e., \(\overline{A}\cap \overline{B}=\emptyset \). We consider two fundamental situations. Suppose the system paths start from a given initial state x and we are interested in a target state set, say B. These trajectories can hit the state set A or not. Therefore, we may define two new concepts:
  • Obstacle avoidance reachability (see Fig. 10.1). In this interpretation B is a safe set, while A is not. The goal is to compute the probability, denoted by
    $$\begin{aligned} p_{\lnot A}^{B}(x), \end{aligned}$$
    of all trajectories that start from a given initial state x and hit the set B without hitting the state set A.
    Fig. 10.1

    Obstacle avoidance reachability

  • Waypoint reachability (see Fig. 10.2). In this interpretation we are interested in computing the probability, denoted by
    $$\begin{aligned} p_{A}^{B}(x), \end{aligned}$$
    of all trajectories that hit B only after hitting A.
    Fig. 10.2

    Waypoint reachability

The connection between the two types of stochastic reachability is given by the formula
$$\begin{aligned} p_{\lnot A}^{B}(x)+p_{A}^{B}(x)= \varphi_{B}(x) \end{aligned}$$
where \(\varphi_{B}\) is the reachability function for the target set B given by formula (7.3). Therefore, computations of the probabilities corresponding to the two types of reachability are equivalent. To keep the notation simple, it is more convenient to work with waypoint reachability, which from now on will be called simply state-constrained reachability.
Now we consider the executions (paths) of a stochastic hybrid process that starts in x=(q,z)∈X. When we investigate state-constrained reachability, we seek the probability that these trajectories visit A before visiting, eventually, B. Mathematically, this is the probability of
$$\begin{aligned} \bigl\{ \omega \big|x_{t}(\omega )\notin B,\forall t\leq T_{A} \bigr\} . \end{aligned}$$
Moreover, using the first hitting time T B of B, we are interested in computing
$$\begin{aligned} p_{A}^{B}(x)=\mathbb{P}_{x}[T_{A}<T_{B}]. \end{aligned}$$
(10.1)

Consequently, state-constrained reachability is related to some classical topics treated in the literature of Markov processes, like the first passage problem [109], excursion theory [100] and estimation of the equilibrium potential of the capacitor of the two sets [37]. These references provide theoretical characterisations for probabilities (10.1) for different classes of Markov processes. Here, the scope is not to survey all these characterisations but to identify the appropriate analytical solutions of this problem.
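As a quick illustration (not part of the original development), probability (10.1) can be approximated by crude Monte Carlo simulation in a simple continuous-state example: standard one-dimensional Brownian motion with A=(−∞,0] and B=[1,∞), for which the classical exact answer started from x₀∈(0,1) is 1−x₀. The Euler time step and sample size below are illustrative choices.

```python
import random

def hits_A_first(x0, rng, a=0.0, b=1.0, dt=1e-3):
    """One Euler path of standard Brownian motion started at x0; returns True
    if it reaches A = (-inf, a] before B = [b, inf)."""
    x = x0
    while a < x < b:
        x += rng.gauss(0.0, dt ** 0.5)
    return x <= a

def estimate_p(x0, n=4000, seed=0):
    rng = random.Random(seed)
    return sum(hits_A_first(x0, rng) for _ in range(n)) / n

# For Brownian motion on (0, 1) the exact value is 1 - x0, here 0.7;
# the estimate carries a discretisation bias of order sqrt(dt).
print(estimate_p(0.3))
```

The estimate approaches the exact value as the time step shrinks and the number of paths grows; the Euler scheme slightly overshoots the boundaries, which is the main source of bias.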

10.3 Mathematical Characterisations

The main scope of this section is to prove that state-constrained reachability probabilities can be characterised as solutions of certain boundary value problems expressed in terms of the infinitesimal generator of the given stochastic hybrid process.

We will use the concepts of excessive function and kernel operator V that can be introduced with respect to a Markov process M (see Chap.  2).

The following assumption is essential for the results of this section.

Assumption 10.1

Suppose that M is a transient Markov process, i.e., there exists a strictly positive Borel-measurable function q such that Vq is a bounded function.

Using [37], we have the following characterisation.

Proposition 10.1

State-constrained reachability probability \(p_{A}^{B}\) has the following properties:
  (i) \(0\leq p_{A}^{B}\leq 1\) a.s. on X;

  (ii) \(p_{A}^{B}=0\) a.s. on B and \(p_{A}^{B}=1\) on A;

  (iii) \(p_{A}^{B}\) is the potential of a signed measure ν such that the support of \(\nu^{+}\) is contained in A and the support of \(\nu^{-}\) is contained in B.
An inclusion–exclusion argument leads to the following formula:
$$\begin{aligned} p_{A}^{B}(x) = & \mathbb{P}_{x}(T_{A}<T_{B}) \\ =& P_{A}1(x)-P_{B}P_{A}1(x)+P_{A}P_{B}P_{A}1(x)- \cdots. \end{aligned}$$
We introduce the following notation:
  • composition of the hitting operators corresponding to the target sets A and B
    $$\begin{aligned} V^{A\rightarrow B}:=P_{A}\circ P_{B}, \end{aligned}$$
    where \(P_{A}f(x):=\mathbb{E}_{x}[f(x_{T_{A}})]\) is the hitting operator of A, provided that \(T_{A},T_{B}<\zeta \) (ζ is the lifetime of the process);
  • the probability of hitting A again after n excursions between A and B
    $$\begin{aligned} p_{n}:=\bigl(V^{A\rightarrow B}\bigr)^{n}\varphi_{A}, \end{aligned}$$
    where φ A is given by ( 7.3);
  • the probability of hitting A again after ‘infinitely many’ excursions between A and B
    $$\begin{aligned} \varGamma :=\sum_{n=0}^{\infty }p_{n}. \end{aligned}$$
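The alternating series above can be checked numerically on a toy example. The sketch below is illustrative, not from the text: a random walk on {0,…,6} with a constant killing probability at each step, so that the process is transient as in Assumption 10.1. Since A={0} and B={6} are singletons, the hitting operators act by scalar multiplication, and the terms \(p_{n}-P_{B}p_{n}\) of Proposition 10.2 form a geometric series with ratio equal to the weight of one A→B→A excursion.

```python
# Random walk on {0, ..., 6} with killing: from an interior state move one
# step left or right w.p. 0.45 each and die w.p. 0.1; from an endpoint move
# inward w.p. 0.9.  A = {0}, B = {6}; the killing makes the chain transient.
N = 6
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(1, N):
    P[i][i - 1] = P[i][i + 1] = 0.45
P[0][1] = P[N][N - 1] = 0.9

def hit_prob(target):
    """h(x) = P_x(the chain ever visits `target`), by fixed-point iteration."""
    h = [0.0] * (N + 1)
    for _ in range(5000):
        h = [1.0 if x == target else sum(P[x][y] * h[y] for y in range(N + 1))
             for x in range(N + 1)]
    return h

hA, hB = hit_prob(0), hit_prob(N)

def series(x, terms=60):
    """p_A^B(x) from the alternating hitting-operator series.  For singleton
    sets, each A -> B -> A excursion contributes the factor rho, so the
    pairs p_k - P_B p_k form a geometric series."""
    rho = hB[0] * hA[N]
    return sum((hA[x] - hB[x] * hA[N]) * rho ** k for k in range(terms))

def direct(x):
    """p_A^B(x) by iterating the first-step equations with p = 1 on A, 0 on B."""
    p = [0.0] * (N + 1)
    for _ in range(5000):
        p = [1.0 if i == 0 else 0.0 if i == N
             else sum(P[i][y] * p[y] for y in range(N + 1))
             for i in range(N + 1)]
    return p[x]

print(series(3), direct(3))  # the two computations agree
```

The agreement of the truncated series with the direct first-step computation is a numerical instance of the representation formula of Proposition 10.2.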

Proposition 10.2

We have the following representation formula:
$$\begin{aligned} p_{A}^{B}=(I-P_{B})\varGamma \end{aligned}$$
where I:ℬ b (X)→ℬ b (X) is the identity operator.

Proof

Each \(p_{n}\) is an excessive function, bounded by 1, and \(P_{B}p_{n}\leq p_{n}\). Therefore,
$$\begin{aligned} p_{n}-P_{B}p_{n}\in [0,1]. \end{aligned}$$
Let us set T 0:=0 and T 1,T 2,T 3,… to be times of successive visits to A, then to B, then back to A and so on. Formally, these times are defined as
$$\begin{aligned} T_{1} :=&T_{A}, \\ T_{2} :=&T_{A}+T_{B}\circ \theta_{T_{A}}, \\ \vdots & \\ T_{2n+1} :=& T_{2n}+T_{A}\circ \theta_{T_{2n}}, \\ T_{2n+2} :=& T_{2n+1}+T_{B}\circ \theta_{T_{2n+1}}, \end{aligned}$$
where θ is the shift operator. An induction argument shows that
$$\begin{aligned} P_{T_{2n}}=(P_{A}P_{B})^{n},\quad n\in \mathbb{N}. \end{aligned}$$
Then it can be easily checked that
$$\begin{aligned} \mathbb{P}_{x}[T_{A}<T_{B},T_{2n+1}\leq L\leq T_{2n+2}]=p_{n}(x)-P_{B}p_{n}(x), \end{aligned}$$
where L is the last exit time from A, i.e.,
$$\begin{aligned} L=L_{A}=\sup \{t>0|x_{t}\in A\}. \end{aligned}$$
L is a.s. finite because the process is assumed transient (Assumption 10.1); in particular, if it enters a set, then it must eventually leave it. □

Theorem 10.3

State-constrained reachability probability \(p_{A}^{B}\) solves the following boundary value problem:
$$\begin{aligned} \mathcal{L}p_{A}^{B}=0 \mbox{ on } X\setminus (A\cup B);\qquad p_{A}^{B}=1 \mbox{ on } A;\qquad p_{A}^{B}=0 \mbox{ on } B, \end{aligned}$$
where \(\mathcal{L}\) is the infinitesimal generator of the given stochastic hybrid process.

This is the main theorem about the characterisation of state-constrained reachability. The theorem can be proved for Borel right processes, hence for stochastic hybrid processes. Stochastic hybrid processes have a continuous dynamics given by some diffusion processes and a discrete dynamics described by a Markov chain. Therefore, the proof is a consequence of the following two lemmas, which are instantiations of the theorem for Markov chains and Brownian motion. We have not found these proofs in monographs on stochastic processes that treat first passage problems or excursion theory for Markov processes (see, for example, [109]); therefore, we sketch them in the following.

Lemma 10.4

Let us consider a (discrete time, discrete state) Markov chain (X t ) with the state space Γ and the one-step transition function p 1(x,y). Given two disjoint sets \(A,B\subset \varGamma \), the state-constrained reachability probability \(p_{A}^{B} ( x ) \) is the solution of the boundary value problem
$$\begin{aligned} p(x)=\sum_{y\in \varGamma }p_{1}(x,y)p(y),\quad x\notin A\cup B;\qquad p=1 \mbox{ on } A;\qquad p=0 \mbox{ on } B. \end{aligned}$$

Lemma 10.5

For a discrete space Markov chain, it is known that its infinitesimal generator is given by
$$\begin{aligned} \mathcal{L}f(x)=\sum_{y\in \varGamma }p_{1}(x,y)f(y)-f(x). \end{aligned}$$

Proof

If \(x\notin A\cup B\), we make the elementary remark that the first step away leads either to B, in which case the event {T A <T B } fails to happen, or to A, in which case the event happens, or to another point \(y\notin A\cup B\), in which case the event happens with probability \(\mathbb{P}_{y}[T_{A}<T_{B}]\). Therefore, we obtain
$$\begin{aligned} \mathbb{P}_{x}[T_{A}<T_{B}]=\sum _{y\in A}p_{1}(x,y)+\sum_{y\notin A\cup B}p_{1}(x,y)\mathbb{P}_{y}[T_{A}<T_{B}]. \end{aligned}$$
Hence, writing \(p(x)=\mathbb{P}_{x}[T_{A}<T_{B}]\), for \(x\notin A\cup B\) we obtain
$$\begin{aligned} p(x)=\sum_{y\in \varGamma }p_{1}(x,y)p(y). \end{aligned}$$
This ends the proof. □
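Lemma 10.4 can be tried out directly on the symmetric random walk on {0,…,N} (gambler's ruin). The sketch below is illustrative: it solves the discrete boundary value problem by fixed-point iteration and recovers the classical closed form p(x)=1−x/N.

```python
# Boundary value problem of Lemma 10.4 for the symmetric random walk on
# {0, ..., N} with A = {0} and B = {N}: p = 1 on A, p = 0 on B, and
# p(x) = sum_y p1(x, y) p(y) in between, solved by fixed-point iteration.
N = 10
p = [1.0] + [0.0] * N
for _ in range(20000):
    p = [1.0] + [0.5 * p[x - 1] + 0.5 * p[x + 1] for x in range(1, N)] + [0.0]

# exact solution of this discrete Dirichlet problem: p(x) = 1 - x/N
print(p[3])  # ≈ 0.7
```

The iteration converges because the restriction of the transition kernel to the interior states is a strict contraction on the relevant modes.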

Lemma 10.6

Let us consider W the standard d-dimensional Wiener process. Let A,B be two disjoint capacitable sets (see [51] for the full definition) of nonzero capacity such that \(A\cup B\) is closed. The reachability probability \(p(x)=p_{A}^{B}(x)\) satisfies the Laplace problem
$$\begin{aligned} \nabla^{2}p(x)=0 \end{aligned}$$
on X−(A∪B) with the boundary conditions p=1 on A and p=0 on B.

Proof

Let \(x\in X-(A\cup B)\) and let H be a ball of radius h with surface S, contained in X−(A∪B) and centred at x. Define the random variable
$$\begin{aligned} T=\inf \bigl\{ t\big|x_{t}(\omega )\in S\bigr\} . \end{aligned}$$
This has the property that
$$\begin{aligned} \mathbb{P}_{x}[T<\infty ]=1. \end{aligned}$$
For i≥1, let
$$\begin{aligned} E_{i}:=\bigl\{ \big|W(i)-W(i-1)\big|\leq 2h\bigr\} . \end{aligned}$$
Then \(\mathbb{P}_{x}(E_{1})<1\) and \(\lim_{n\rightarrow \infty } \mathbb{P}_{x}[T>n]=0\); the latter results from the independence of the increments and the inequality
$$\begin{aligned} \mathbb{P}_{x}[T>n]\leq \mathbb{P}_{x}(E_{1} \cap \cdots \cap E_{n})=\mathbb{P}_{x}(E_{1})^{n}. \end{aligned}$$
We have
$$\begin{aligned} p(x)=\int_{y\in S}\mathbb{P}_{x}\bigl[T_{A}<T_{B}\big|W(T)=y \bigr]f(y)\,dS, \end{aligned}$$
where f(y)=1/|S| is the uniform density on S. This means that
$$\begin{aligned} p(x)=\frac{1}{|S|}\int_{y\in S}p(y)\,dS, \end{aligned}$$
i.e., p satisfies the mean value property, hence it is harmonic on X−(A∪B).
 □

Theorem 10.3 characterises the probabilities of the state-constrained reachability as the solutions for a Dirichlet boundary value problem (DBVP) associated to the generator of the underlying stochastic hybrid process. This generator is a second order elliptic integro-differential operator and it is known that, for this type of nonlocal operator, the value of the solutions for the DBVP has to be prescribed not only on the boundary of the domain but also in its whole complementary set [95]. Under standard hypotheses, the existence and uniqueness of the solutions for such equations can be proved. The solutions are called potential functions for the underlying Markov process and they play the same role as harmonic functions for the Laplace operator.

Dirichlet boundary value problems for such operators have already been addressed in the literature by different theories:
  • for a classical PDE approach using Sobolev spaces, see [95];

  • for a probabilistic approach using the operator semigroup, see [222];

  • for a viscosity solution approach, see [11].

For the verification problems defined in the context of stochastic hybrid systems, the DBVP defined in Theorem 10.3 will have to be solved only locally in the appropriate modes. In this way, the quite difficult boundary condition (that results from the definition of the infinitesimal generator) can be avoided. Then we consider that most numerical solutions available for the Dirichlet boundary value problem corresponding to second order elliptic partial differential operators can be extended in a satisfactory manner to solve our problem.
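A minimal numerical sketch of such a local solve follows, under the simplifying assumption that the continuous dynamics in the mode is Brownian, so the generator reduces to the two-dimensional Laplacian; the grid size, the choice of A as a small central block with p=1 and of B as the outer boundary with p=0, and the Gauss–Seidel sweeps are all illustrative choices, not prescriptions from the text.

```python
# Finite-difference sketch of the Dirichlet problem of Theorem 10.3 inside
# one mode, with generator taken as the 2-d Laplacian: p = 1 on the set A
# (a small central block), p = 0 on the outer boundary B, and the discrete
# mean-value (harmonicity) equation in between, solved by Gauss-Seidel.
n = 21
A = {(i, j) for i in range(9, 12) for j in range(9, 12)}

p = [[0.0] * n for _ in range(n)]
for _ in range(4000):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i, j) in A:
                p[i][j] = 1.0
            else:
                p[i][j] = 0.25 * (p[i - 1][j] + p[i + 1][j]
                                  + p[i][j - 1] + p[i][j + 1])

print(p[10][5])  # reach probability at a point between A and the boundary
```

By the maximum principle the computed values stay in [0,1] and decay monotonically away from A, matching the qualitative picture of a potential function.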

Without doubt, we need to consider the applications of state-constrained reachability in the framework of controlled stochastic hybrid systems. The control can be defined either
  • at the ‘low level’, i.e., continuous dynamics in the operating modes are controlled diffusions,

  • or, at the ‘decision level’, i.e., the jumping structure is governed by a decision process (usually in the form of a Markov decision process).

10.4 State-Constrained Reachability in a Development Setting

So far, state-constrained reachability has been defined and investigated in the abstract setting provided by the SHS models. We have shown that the reach probabilities represent the solution of an elliptic integro-differential equation. This problem can be efficiently solved in some cases, for example in the case where the equation can be reduced to an elliptic one. However, for complex systems, the integro-differential equation can be difficult to solve. In this case, the abstract model of SHS should be replaced with more manageable models. This can be done in many ways, like through approximations, functional abstractions, model reductions and so on. Based on existing research we propose a multilayer approach for describing a complex hybrid system with stochastic behaviour. The same system is described using a set of different models, each one constructed at a different level of abstraction. The models can be related by abstraction or refinement maps. This approach makes it possible for us to solve a specific problem for the given system at the right level of abstraction.

In this setting, we search for more manageable solutions for state-constrained reachability analysis.

10.4.1 Multilayer Stochastic Hybrid Systems

In this subsection, we introduce a multilayer model for stochastic hybrid systems. This is inspired by the viewpoints model from software engineering. There, a system is modularly developed from different perspectives. Each perspective provides a model of the system, called a viewpoint. Then the viewpoints need to be consistently unified to provide the overall system description. This methodology corresponds to a horizontal development philosophy. In this section, we propose a vertical (or hierarchical) viewpoint approach, in which the system is described by viewpoints constructed on top of each other, each one providing a partial model at a different abstraction level.

Mathematically, at the level j, a viewpoint is an SHS and all its elements (discrete/continuous states, trajectories, jumping times, etc.) carry the superscript j.
At level 0, the corresponding viewpoint is an SHS, denoted H 0. The viewpoint H j is related to the viewpoint of level (j−1) by a pair of maps (Φ,Ψ), where
$$\begin{aligned} \varPhi :X^{j-1}\rightarrow X^{j} \end{aligned}$$
relates the states and
$$\begin{aligned} \varPsi :\varOmega^{j}\rightarrow \varOmega^{j-1} \end{aligned}$$
relates the trajectories. In relational algebra, such a pair is called a twisted relation.
The first map Φ is a surjective map that describes how H j simulates H j−1 by means of the following property:
$$\begin{aligned} \mathcal{L}^{j-1}(f\circ \varPhi )=\bigl(\mathcal{L}^{j}f\bigr)\circ \varPhi , \end{aligned}$$
(10.4)
for all f in the domain of ℒ j , where ℒ j−1 and ℒ j represent the infinitesimal generators for H j−1 and, respectively, for H j . Relation (10.4) can be given as well in terms of transition probabilities or operator semigroups, i.e., \(P_{t}^{j-1}(f\circ \varPhi )=(P_{t}^{j}f)\circ \varPhi \) for all t≥0.

The second map Ψ can be defined in various ways, adding flexibility to the viewpoint modelling. For example, Ψ can be generically defined by replacing a certain discrete transition in the dynamics of the viewpoint j with a set of trajectories of the hybrid system described by the viewpoint (j−1). In particular, a single discrete transition can be mapped into the trajectories of a continuous dynamical system. In this way, a viewpoint described by a discrete transition system can be related to another viewpoint described by a hybrid system.

To illustrate the viewpoint approach let us consider a flying aircraft. At the lowest level of detail, its dynamics can be accurately described by a switching diffusion process. In this model of SHS, the hybrid trajectories are continuous and are piecewise diffusion paths. At the most abstract level, the flight can be modelled as a probabilistic timed automaton that is a sort of discrete transition system. In this viewpoint, the aircraft lies in a state for a certain time and then with a given rate it makes a discrete transition. An intermediate viewpoint, between the continuous and discrete, is modelled by a stochastic hybrid system. In the intermediate viewpoint, a discrete transition from the discrete viewpoint is refined into a continuous mode and certain continuous paths from the continuous viewpoint are abstracted away into discrete transitions. Of course, in the stochastic setting there can be many subtle cases, like rates of discrete transitions that depend on the diffusion evolution in a mode.

The utility of a multilayer model consists of the possibility of solving categories of problems at different levels/viewpoints. For example, the stability problems are more efficiently studied in a continuous viewpoint (corresponding to the lowest abstraction level). A safety verification problem can be formally tackled in a discrete viewpoint (corresponding to the highest abstraction level). Many control problems can be suitably studied in the hybrid viewpoint (corresponding to an intermediate level of abstraction).

State-constrained reachability has been defined in a viewpoint corresponding to an intermediate discrete/continuous level of abstraction. Since in the previous sections it has been proved that the approach in this viewpoint can lead to problems with integro-differential operators that can be difficult to solve, it is worthy to try to study the problem at other levels of abstractions/viewpoints. In a discrete viewpoint, one can hope to use probabilistic model checking techniques. In a continuous viewpoint, the well-developed mathematical apparatus of diffusion processes is becoming available with specific benefits.

In order to make the state-constrained reachability approach practical, a further refinement of the mathematical model is necessary. This refinement takes into account the Euclidean space, in which processes evolve. We call such processes spatial SHS. An SHS is called spatial if some of its parameters form together a subspace of the Euclidean spaces \(\mathbb{R}\), \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\).

A multilayer model can be fruitfully conjoined with spatial models for designing or for improving control. Suppose that a system that is an n-dimensional SHS is a spatial process in a higher level of abstraction obtained by ignoring the nonspatial parameters. With this, one can obtain a viewpoint in which the system is modelled as a spatial Wiener process. The state-constrained stochastic reachability problem becomes more tractable in this viewpoint. If the reach probability is high and causes of this fact can be detected, then in the original viewpoint a control strategy can be considered that minimises the reach probability. For example, in the case of an air traffic control system, the spatial viewpoint can indicate that the collision probability becomes higher in areas of dense traffic with no coordination. Then a control strategy will design a pathway for the aircraft that avoids the high traffic density regions.

Continuous Viewpoint

The following two examples are inspired from [109].

Example 10.7

Let A be the sphere with radius ε and centre at the origin of \(\mathbb{R}^{3}\). Let us consider the continuous viewpoint of an SHS modelled as a three-dimensional Wiener process W with \(W_{0}=x_{0}\notin A\). The problem is to compute the probability that W visits A. Let B be a sphere with radius R and centre at the origin, where R is much larger than ε (ε≪R). We have to look for a solution of Laplace's equation in spherical polar coordinates:
$$\begin{aligned} \frac{\partial }{\partial r}\biggl(r^{2}\frac{\partial p}{\partial r}\biggr)+ \frac{1}{\sin \theta }\frac{\partial }{\partial \theta }\biggl(\sin \theta \frac{\partial p}{\partial \theta }\biggr)+ \frac{1}{\sin^{2}\theta }\frac{\partial^{2}p}{\partial \phi^{2}}=0, \end{aligned}$$
(10.5)
subject to the boundary conditions p=1 on r=ε and p=0 on r=R. Solutions of Eq. (10.5) with spherical symmetry have the form
$$\begin{aligned} p(x)=\frac{c_{1}}{r}+c_{2}\quad \mbox{if }x=(r,\theta ,\phi ). \end{aligned}$$
Using the boundary conditions, the following solution can be obtained [109]:
$$\begin{aligned} p_{R}(x)=\frac{r^{-1}-R^{-1}}{\varepsilon^{-1}-R^{-1}}. \end{aligned}$$
Letting R→∞, we get
$$\begin{aligned} p_{R}(x)\rightarrow \mathbb{P}(T_{A}<\infty )= \frac{\varepsilon }{r},\quad r>\varepsilon . \end{aligned}$$
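A quick numerical check of this closed form (illustrative, not part of the text): the function below evaluates \(p_{R}\), verifies its boundary values, and recovers the limit ε/r for large R.

```python
def p_R(r, eps, R):
    """Probability that 3-d Brownian motion started at radius r hits the
    eps-sphere before the R-sphere (Example 10.7)."""
    return (1.0 / r - 1.0 / R) / (1.0 / eps - 1.0 / R)

print(p_R(0.1, 0.1, 50.0))   # starts on A: probability 1
print(p_R(1.0, 0.1, 1e12))   # essentially the limit eps/r = 0.1
```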

Example 10.8

Let us consider a two-dimensional Wiener process W with W 0=x 0, which can be thought of as another spatial viewpoint for an SHS. Again we use the polar coordinates (r,θ) and suppose that W evolves in the set
$$\begin{aligned} -\pi <-\alpha \leq \theta \leq \alpha <\pi . \end{aligned}$$
If A is the line θ=α and B is the line θ=−α, we may ask for the probability that W reaches A before B. We consider now the planar Laplace equation in polar coordinates
$$\begin{aligned} \frac{1}{r}\frac{\partial }{\partial r}\biggl(r\frac{\partial p}{\partial r}\biggr)+\frac{1}{r^{2}}\frac{\partial^{2}p}{\partial \theta^{2}}=0 \end{aligned}$$
subject to the boundary conditions
$$\begin{aligned} p=1\quad \mbox{on }\theta =\alpha ;\qquad p=0\quad \mbox{on }\theta =-\alpha . \end{aligned}$$
It can be checked [109] that the function
$$\begin{aligned} p=\frac{\theta +\alpha }{2\alpha },\quad -\alpha \leq \theta \leq \alpha , \end{aligned}$$
is the required solution and so
$$\begin{aligned} p(x_{1},x_{2})=\frac{1}{2\alpha }\biggl(\alpha + \tan^{-1}\frac{x_{2}}{x_{1}}\biggr). \end{aligned}$$
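The closed form of Example 10.8 can also be checked by a crude Monte Carlo simulation of the planar Wiener process in the wedge; the parameters below (α=π/6, starting angle π/12, Euler step, sample size) are illustrative choices, and the estimate carries the usual discretisation bias.

```python
import math, random

def wedge_hit_prob(alpha=math.pi / 6, theta0=math.pi / 12, r0=0.5,
                   n=4000, dt=1e-3, seed=1):
    """Monte Carlo estimate for planar Brownian motion in the wedge
    |theta| <= alpha: probability of reaching theta = alpha before
    theta = -alpha (Example 10.8)."""
    rng = random.Random(seed)
    s = dt ** 0.5
    hits = 0
    for _ in range(n):
        x, y = r0 * math.cos(theta0), r0 * math.sin(theta0)
        for _ in range(100000):          # safety cap; exits are a.s. finite
            x += rng.gauss(0.0, s)
            y += rng.gauss(0.0, s)
            th = math.atan2(y, x)
            if abs(th) >= alpha:
                hits += th >= alpha
                break
    return hits / n

# exact value: (theta0 + alpha) / (2 * alpha) = 0.75
print(wedge_hit_prob())
```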

Discrete Viewpoint

Example 10.9

Let us consider a discrete time Markov chain that can be thought of as a discrete viewpoint for an SHS. In the literature, the number of transitions (time steps) needed for the state to move from i to j for the first time is referred to as the first passage time. It is possible to calculate the average (or expected) number of transitions for the passage from state i to j. Let m ij be the expected first passage time from state i to j. Moving from i to j in exactly one transition has probability p ij . Otherwise, the first step leads to some state k (≠ j); the probability of moving from i to some k (≠ j) is ∑ k≠j p ik , and from such a k the expected number of further transitions needed to reach j is m kj . By the Markov property, the expected contribution of these paths is ∑ k≠j p ik m kj . Then
$$\begin{aligned} m_{ij}=p_{ij}+\sum_{k\neq j} p_{ik}+\sum_{k\neq j} p_{ik}m_{kj} \end{aligned}$$
or, finally,
$$\begin{aligned} m_{ij}=1+\sum_{k\neq j}p_{ik}m_{kj}. \end{aligned}$$
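The first passage equations can be solved numerically for any small chain. The transition matrix below is a made-up example; the fixed-point iteration converges because each row with one column removed is substochastic.

```python
# Expected first passage times m[i][j] for a small illustrative chain,
# obtained by fixed-point iteration of m_ij = 1 + sum_{k != j} p_ik m_kj.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
n = len(P)

m = [[0.0] * n for _ in range(n)]
for _ in range(2000):
    m = [[1.0 + sum(P[i][k] * m[k][j] for k in range(n) if k != j)
          for j in range(n)] for i in range(n)]

print(m[0][2])  # ≈ 5.0, as solving the two linear equations by hand confirms
```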

10.4.2 Electrostatic Analogy

It is known that the theory of Markov processes is intimately connected with mathematical physics (see [33]). The solutions of the DBVP from Theorem 10.3 can be characterised as certain potential functions defined on the underlying Markov process state space. These describe the ‘probability distributions’ of a charge that is distributed on the state space where the set A (the obstacle) produces a repulsive force and the set B (the target) yields an attractive force. In potential theory, the physical interpretation of state-constrained reachability probability considered in this chapter is related to the condenser problem (see [74]). This is described as follows: suppose there are given two disjoint compact conductors A,B in the Euclidean space \(\mathbb{R}^{3}\) of positive capacity [51]. A positive electric unit charge placed on A and a negative unit charge on B, both allowed to distribute freely on the respective sets, will find a state of equilibrium, which is characterised on one hand by minimal energy and on the other hand by constant potential on A and on B (possibly taking out exceptional sets of zero capacity).

10.5 Further Generalisations: Randomised Targets

In this section, we investigate how to define the notion of the state-constrained reachability when the ‘final goal’ is replaced by a ‘random event’ that takes place at a random time. The objective is to compute the probability of those trajectories that visit a set before ‘something’ random happens.

Leveraging State-Constrained Reachability

State-constrained reachability analysis seeks to obtain estimations for \(\mathbb{P}_{x}[T_{A}<T_{B}]\), i.e., the probability that the process visits A before it visits B. As we have seen in the previous sections, this problem can be characterised as a boundary value problem. In practice, it may happen that the sets A and B are not explicitly given. These sets can be characterised as:
  • level sets for some given functions,

  • sets of states that validate some logical formulae,

  • metastable sets for the given process.

Sometimes, for the computation of state-constrained reachability probabilities, more information is needed about at least one of these sets. Often, the available information is about the set B that can be either:
  • the boundary of one mode of the stochastic hybrid process,

  • a cemetery set where the process is trapped,

  • a set that is reached according to a state-dependent rate and so on.

Moreover, in the expression of the state-constrained reachability probability, we may replace the hitting time T B of B with a suitable random time that could be, for example:
  • a discrete transition time from one mode to another of the stochastic hybrid process,

  • the time of occurrence of a certain event,

  • a time defined by the until operator in a suitable continuous stochastic logic associated to our hybrid Markov process.

Then the probabilities that should be computed are
$$\begin{aligned} \mathbb{P}_{x}[T_{A}<T], \end{aligned}$$
where T is a random time that will be defined properly in the remainder of this section.
The idea of a moving target is illustrated in Fig. 10.3.
Fig. 10.3

Moving target

Randomised Stopping Times

We enlarge the space of stopping times with the so-called randomised stopping times [15]. A randomised stopping time T is defined to be a map
$$\begin{aligned} T:\varOmega \times [0,1]\rightarrow [0,\infty ] \end{aligned}$$
such that T is a stopping time with respect to the σ-algebras \(\mathcal{F}_{t}\otimes \mathcal{B}_{1}\), where ℬ1 represents the Borel σ-algebra of the interval [0,1]. It is required that, for every ω∈Ω, T(ω,⋅) is nondecreasing and left continuous on [0,1].
If T is a randomised stopping time, then T(⋅,a) is an ordinary stopping time for each a∈[0,1]. A randomised stopping time T can be characterised by a stopping time measure (ω-distribution) K induced by T. K is defined by
$$\begin{aligned} K\bigl(\omega ,[0,t]\bigr):=m_{1}\bigl\{ a\in [0,1]\big|T(\omega ,a)\leq t\bigr\} , \end{aligned}$$
where m 1 is the Lebesgue measure on [0,1]; then K(ω,⋅) is a measure on ℬ[0,∞] and a version of the conditional distribution of T given the entire trajectory ω.
Using the measure K, one can get back the randomised stopping time T by
$$\begin{aligned} T(\omega ,a)=\inf \bigl\{ t:K\bigl(\omega ,[0,t]\bigr)\geq a\bigr\} . \end{aligned}$$
(10.7)
Moreover, one can define a stopping time measure as an independent mathematical object as follows.
A map \(K:\varOmega \times \mathcal{B}[0,\infty ]\rightarrow [0,1]\) is called a stopping time measure if
  (i) K(ω,⋅) is a probability measure for each ω∈Ω;

  (ii) K(⋅,[0,t]) is \(\mathcal{F}_{t}\)-measurable for each t.
Then T can be defined by (10.7). Therefore, there exists a complete correspondence between the notions of stopping time measure and randomised stopping time.
Usually, the following notation is in use:
$$\begin{aligned} K_{t}:=K\bigl((t,\infty ]\bigr)=K\bigl(\cdot ,(t,\infty ]\bigr). \end{aligned}$$
If \(\mathbb{P}\) is the underlying probability on \((\varOmega ,\mathcal{F})\) and m 1 is the Lebesgue measure on the unit interval [0,1], then K t is a version of the conditional probability of {T>t} under the probability \(\mathbb{P}\times m_{1}\) with respect to the σ-algebra \(\mathcal{F}\).
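Formula (10.7) is a generalised inverse and can be implemented directly. The sketch below (illustrative, not from the text) discretises the stopping time measure of exponential killing at rate r, for which K(ω,[0,t])=1−e^{−rt} independently of the path ω, and samples T(ω,a) for uniform a.

```python
import bisect, math, random

# Stopping time measure of exponential killing at rate r, on a time grid:
# K(omega, [0, t]) = 1 - exp(-r t), here independent of omega.
r, dt = 2.0, 1e-3
masses = [1.0 - math.exp(-r * k * dt) for k in range(20000)]

def randomized_time(a):
    """T(omega, a) = inf{t : K(omega, [0, t]) >= a}, cf. Eq. (10.7)."""
    i = bisect.bisect_left(masses, a)
    return i * dt if i < len(masses) else math.inf

rng = random.Random(0)
samples = [randomized_time(rng.random()) for _ in range(5000)]
print(sum(samples) / len(samples))  # close to the exponential mean 1/r = 0.5
```

Sampling a uniformly and applying the generalised inverse reproduces the exponential law, which is exactly the correspondence between T and K described above.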

Markovian Randomised Stopping Times

Working with randomised stopping times might be a difficult task, since the Markov property is still true only with respect to the nonrandomised stopping times (strong Markov property). To keep things simpler, we consider only the Markovian randomised stopping times. Examples of this kind of random time are:
  • Markov killing times: T is a Markov killing time for M if under \(\mathbb{P}_{x}\) the killed process (x t |0≤tT) is Markovian with the sub-Markovian semigroup (Γ t ) t≥0:
    $$\begin{aligned} \varGamma_{t}f(x):=\mathbb{P}_{x}\bigl[f(x_{t})1_{(t<T)} \bigr]. \end{aligned}$$
    In addition, we assume that Γ t f is ℬ-measurable for all t>0 and all positive ℬ-measurable f.
  • Terminal times: An \((\mathcal{F}_{t})\)-stopping time \(T:\varOmega \rightarrow \mathbb{R}_{+}\) is called a terminal time if
    $$\begin{aligned} T=t+T\circ \theta_{t} \end{aligned}$$
    (10.8)
    identically on [t<T].
Clearly, relation (10.8) expresses the memoryless property of a terminal time,
$$\begin{aligned} T(\omega )-t=T(\theta_{t}\omega ), \end{aligned}$$
i.e., the value of T on a path ω after time t has elapsed equals the value of T on the same path shifted by the time t.
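For instance, the hitting time of a measurable set B satisfies (10.8). Writing the shift explicitly,

```latex
T_{B}(\omega )=\inf \bigl\{ s\geq 0: x_{s}(\omega )\in B\bigr\} ,
\qquad
(T_{B}\circ \theta_{t}) (\omega )=\inf \bigl\{ s\geq 0: x_{t+s}(\omega )\in B\bigr\} ,
```

so on the event {t<T B } the path has not met B before time t, and hence inf{s:x s (ω)∈B}=t+inf{s:x t+s (ω)∈B}, which is exactly T B =t+T B ∘θ t .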
Examples of killing times are:
  • the random time T r with the stopping time measure \(K_{t}=e^{-rt}\) (the corresponding killed semigroup is \(e^{-rt}P_{t}\));

  • \(\infty =\lim_{r\rightarrow 0}T_{r}\);

  • the first entrance/last exit time of a suitable subset B of the state space X;

  • the random time obtained by killing the process at state-dependent rate k(x t ).

A finite fixed time T is not a Markov killing time unless the process is set up as a space–time process, in which case T becomes a hitting time. As the example of last exit times shows, killing times need not be stopping times of the process, whereas terminal times are necessarily stopping times.
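The last killing example above can be made concrete. The Python sketch below (the rate k and the discretisation are hypothetical) kills a discretised trajectory at state-dependent rate k(x t ); the associated stopping time measure is \(K_{t}=\exp (-\int_{0}^{t}k(x_{s})\,ds)\), and a constant rate k≡r recovers the exponential killing time T r :

```python
import math
import random

def killed_at_rate(path, k, dt, a=None):
    """Killing time for a discretised trajectory (x_0, x_1, ...)
    at state-dependent rate k(x).  The survival probability is
    K_t = exp(-int_0^t k(x_s) ds), and we invert the cumulative
    mass K(omega, [0, t]) = 1 - K_t at level a, as in (10.7).
    """
    if a is None:
        a = random.random()
    integral = 0.0
    for n, x in enumerate(path):
        integral += k(x) * dt
        if 1.0 - math.exp(-integral) >= a:
            return (n + 1) * dt   # killed during this step
    return float("inf")           # the process survives the whole path

# A constant rate k = r recovers the exponential killing time T_r:
# at level a it solves 1 - e^{-rT} = a, i.e. T = -ln(1 - a) / r.
T = killed_at_rate([0.0] * 10000, lambda x: 2.0, dt=0.001, a=0.5)
assert abs(T - (-math.log(0.5) / 2.0)) < 0.01
```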

The most common examples of terminal times are provided by the hitting times of measurable subsets of the state space X. The jumping times in the definition of a stochastic hybrid process are terminal times.

Multiplicative Functionals

A common methodology for obtaining randomised stopping times is via multiplicative functionals. These functionals have a long history in the theory of Markov processes, where they have been employed mostly to describe transformations of the trajectories of these processes.

The seminal work on the properties of trajectories of a Markov process in connection with multiplicative functionals belongs to E.B. Dynkin [77]. Under some regularity hypotheses, Dynkin proved that the transformed processes can be defined so that their trajectories are ‘restrictions’ of the trajectories of the initial process. In [154], Kunita and Watanabe showed that transformations of a Markov process governed by multiplicative functionals whose expectations are dominated by 1 preserve some regularity properties of the initial process, namely (strong) Markovianity, right continuity of the trajectories and quasi-left continuity of sequences of stopping times.

A systematic study of the multiplicative functionals of Markov processes was carried out in [33] and later in [208]. A stopping time measure α is called a multiplicative functional if, for every s,t≥0,
$$\begin{aligned} \alpha_{t+s}=\alpha_{t}(\alpha_{s}\circ \theta_{t})\quad \mbox{a.s.} \end{aligned}$$
(10.9)
We assume that α is an exact multiplicative functional [33]. The property (10.9) ensures that the randomised stopping time generated by (α t ) has the memoryless property.
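The canonical example is \(\alpha_{t}=\exp (-\int_{0}^{t}k(x_{s})\,ds)\): additivity of the integral under the shift θ t gives α t+s =α t (α s ∘θ t ). The minimal numerical check below (Python; the rate k and the sample path are hypothetical) verifies (10.9) on a discretised trajectory:

```python
import math

def alpha(path, k, dt, steps):
    """alpha_t = exp(-int_0^t k(x_s) ds), discretised on a path."""
    return math.exp(-sum(k(x) for x in path[:steps]) * dt)

k = lambda x: 1.0 + x * x            # hypothetical state-dependent rate
dt = 0.01
path = [math.sin(0.1 * n) for n in range(300)]

t_steps, s_steps = 100, 150
shifted = path[t_steps:]             # theta_t omega: the path after time t
lhs = alpha(path, k, dt, t_steps + s_steps)                # alpha_{t+s}
rhs = alpha(path, k, dt, t_steps) * alpha(shifted, k, dt, s_steps)
assert abs(lhs - rhs) < 1e-12        # the multiplicative property (10.9)
```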
The study of stochastic reachability when there are constraints regarding:
  • the target goal, which may have its own dynamics (as a set or a point mass), or may represent the boundary of a safe set,

  • the time horizon that could be a randomised stopping time,

represents ongoing research. Partial answers to these problems will be given in a subsequent volume of this book.

10.6 Some Remarks

In this chapter, we extended the so-called constrained reachability problem from the probabilistic discrete case to stochastic hybrid systems. We defined this problem mathematically and obtained the reach probabilities as solutions of a boundary value problem. These characterisations are useful in stochastic control and in probabilistic path planning. The stochastic reachability problem for stochastic hybrid systems has been specialised by introducing constraints relative to the state space, and we proved that state-constrained stochastic reachability is solvable. Moreover, we described how the concept can be leveraged to capture further constraints regarding time and space. Numerical solutions for this problem depend heavily on the underlying model of stochastic hybrid systems; the simplest way to deal with the problem is to work either at the bottom level (using only diffusions) or at the higher level (using only Markov chains).

References

  11. Barles, G., Chasseigne, E., Imbert, C.: On the Dirichlet problem for second-order elliptic integro-differential equations. Preprint (2007)
  14. Baier, C., Haverkort, B.R., Hermanns, H., Katoen, J.-P.: Reachability in continuous-time Markov reward decision processes. In: Logic and Automata: History and Perspectives, pp. 53–71 (2007)
  15. Baxter, J.R., Chacon, R.V.: Compactness of stopping times. Z. Wahrscheinlichkeitstheor. Verw. Geb. 40(3), 169–181 (1977)
  33. Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. Academic Press, New York (1968)
  37. Bovier, A.: Metastability. Lecture Notes in Mathematics, vol. 1970. Springer, Berlin (2009)
  51. Choquet, G.: Theory of capacities. Ann. Inst. Fourier 5, 131–291 (1953)
  74. Doob, J.L.: Classical Potential Theory and Its Probabilistic Counterpart. Springer, Berlin (1984)
  77. Dynkin, E.B.: Markov Processes I. Springer, Berlin (1965)
  95. Garroni, M.G., Menaldi, J.L.: Second Order Elliptic Integro-Differential Problems. Chapman & Hall/CRC Press, London/Boca Raton (2002)
  100. Getoor, R.K.: Excursions of a Markov process. Ann. Probab. 7(2), 244–266 (1979)
  109. Grimmett, G., Stirzaker, D.: Probability and Random Processes. Oxford University Press, London (1982)
  154. Kunita, H., Watanabe, T.: Notes on transformations of Markov processes connected with multiplicative functionals. Mem. Fac. Sci., Kyushu Univ., Ser. A, Math. 17(2), 181–191 (1963)
  208. Sharpe, M.: General Theory of Markov Processes. Academic Press, San Diego (1988)
  222. Taira, K.: Boundary value problems for elliptic integro-differential operators. Math. Z. 222, 305–327 (1996)

Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  • Luminita Manuela Bujorianu, School of Mathematics, University of Manchester, Manchester, UK
