Runtime Monitoring for Markov Decision Processes

We investigate the problem of monitoring partially observable systems with nondeterministic and probabilistic dynamics. In such systems, every state may be associated with a risk, e.g., the probability of an imminent crash. During runtime, we obtain partial information about the system state in the form of observations. The monitor uses this information to estimate the risk of the (unobservable) current system state. Our results are threefold. First, we show that extensions of state estimation approaches do not scale due to the combination of nondeterminism and probabilities. While convex hull algorithms improve the practical runtime, they do not prevent an exponential memory blowup. Second, we present a tractable algorithm based on model checking conditional reachability probabilities. Third, we provide prototypical implementations and manifest the applicability of our algorithms to a range of benchmarks. The results highlight the possibilities and boundaries of our novel algorithms.


Introduction
Runtime assurance is essential in the deployment of safety-critical (cyber-physical) systems [44,12,29,48]. Monitors observe system behavior and indicate when the system is at risk of violating system specifications. A critical aspect in developing reliable monitors is their ability to handle noisy or missing data. In cyber-physical systems, monitors observe the system state via sensors, i.e., sensors are an interface between the system and the monitor. A monitor has to base its decision solely on the obtained sensor output. These sensors are not perfect, and not every aspect of a system state can be measured.
This paper considers a model-based approach to the construction of monitors for systems with imprecise sensors. Consider Fig. 1(b). We assume a model for the environment together with the controller. Typically, such a model contains both nondeterministic and probabilistic behavior, and thus describes a Markov decision process (MDP). In particular, the sensor is a stochastic process [54] that translates the environment state into an observation. For example, this could be a perception module on a plane that, during landing, estimates the movements of an on-ground vehicle, as depicted in Fig. 1(a). Due to a lack of precise data, the vehicle movements themselves may be most accurately described using nondeterminism.
We are interested in the state risk associated with the current system state. The state risk may encode, e.g., the probability that the plane will crash with the vehicle within a given number of steps, or the expected time until reaching the other side of the runway. The challenge is that the monitor cannot directly observe the current system state. Instead, the monitor must infer the current state risk from a trace of observations. This cannot be done perfectly, as the system state cannot be inferred precisely. Rather, we want a sound, conservative estimate. More concretely, for a fixed resolution of the nondeterminism, the trace risk is the weighted sum over the probability of being in a state having observed the trace, times the risk imposed by this state. The monitoring problem is to decide whether, for any possible scheduler resolving the nondeterminism, the trace risk of a given trace exceeds a threshold.
Monitoring of systems that contain either only probabilistic or only nondeterministic behavior is typically based on filtering. Intuitively, the monitor then estimates the current system state based on the model. For purely nondeterministic systems (without probabilities), a set of states needs to be tracked, and purely probabilistic systems (without nondeterminism) require tracking a distribution over states. This tracking is rather efficient. For systems that contain both probabilistic and nondeterministic behavior, filtering is more challenging. In particular, we show that filtering on MDPs results in an exponential memory blowup, as the monitor must track sets of distributions. We show that a reduction based on the geometric interpretation of these distributions is essential for practical performance, but cannot avoid the worst-case exponential blowup. As a tractable alternative to filtering, we rephrase the monitoring problem as the computation of conditional reachability probabilities [9]. More precisely, we unroll and transform the given MDP, and then model check the resulting MDP. This alternative approach yields a polynomial-time algorithm. Indeed, our experiments show the feasibility of computing the risk by computing conditional probabilities. We also show benchmarks on which filtering is a competitive option.
Contribution and outline. This paper presents the first runtime monitoring approach for systems that can be adequately abstracted by a combination of probabilities and nondeterminism and where the system state is partially observable. We describe the use case, show that typical filtering approaches in general fail to deal with this setting, and show that a tractable alternative solution exists. In Sec. 3, we investigate forward filtering, used to estimate the possible system states in partially observable settings. We show that this approach is tractable for systems that have probabilistic or nondeterministic uncertainty, but not for systems that have both. To alleviate the blowup, Sec. 4 discusses an (often) efficacious pruning strategy and its limitations. Sec. 5 presents the tractable algorithm based on model checking conditional reachability probabilities, and our empirical evaluation reveals both strengths and weaknesses of both algorithms. We start with a motivating example and review related work at the end of the paper.
Motivating example. Consider a scenario where an autonomous airplane is in its final approach, i.e., lined up with a designated runway and descending for landing, see Figure 1(a). On the ground, close to the runway, maintenance vehicles may cross the runway. The airplane tracks the movements of these vehicles and has to decide, depending on the movements of the vehicles, whether to abort the landing. To simplify matters, assume that the airplane (P) is tracking the movement of one vehicle (V) that is about to cross the runway. Let us further assume that P tracks V using a perception module that can only determine the position of the vehicle with a certain accuracy [33], i.e., for every position of V, the perception module reports a noisy variant of the position of V. However, it is important to realize that the plane obtains a sequence of these measurements. Figure 1 illustrates the dynamics of the scenario. The world model describing the movements of V and P is given in Figure 1(c), where D2, D1, and D0 define how close P is to the runway, and R, M, and L define the position of V. Depending on what information V perceives about P, given by the atomic proposition {(p)rogress}, and what commands it receives {(w)ait}, it may or may not cross the runway. The perception module receives the information about the state of the world and reports with a certain accuracy (given as a probability) the position of V. The (simple) model of the perception module is given in Figure 1(d). For example, if P is in zone D2 and V is in R, then there is a high chance that the perception module returns that V is on the runway. The probability of incorrectly detecting V's position reduces significantly when P is in D0.
A monitor responsible for making the decision to land or to perform a go-around based on the information computed by the perception module must take into consideration the accuracy of this returned information. For example, if the sequence of sensor readings passed to the monitor is τ = Ro · Ro · Mo, and each state is mapped to a certain risk, then how risky is it to land after seeing τ? For example, if with high probability the world is in state ⟨M, D0⟩, a very risky state, then the plane should go around. In the paper, we address the question of computing the risk based on this observation sequence. We will use this example as our running example.

Monitoring under Imprecise Sensors
In this section, we formalize the problem of monitoring with imprecise sensors when both the world and sensor models are given by MDPs. We start with a recap of MDPs, define the monitoring problem for MDPs, and finally show how the dynamics of the system under inspection can be modeled by an MDP defined by the composition of the sensor and world MDPs.

Markov decision processes. For a countable set X, let Distr(X) ⊂ (X → [0, 1]) define all distributions over X, i.e., for d ∈ Distr(X) it holds that Σ_{x∈X} d(x) = 1. For µ ∈ Distr(X), let the support of µ be defined by supp(µ) := {x | µ(x) > 0}. We call a distribution µ Dirac if |supp(µ)| = 1.
Definition 1 (Markov decision process). A Markov decision process is a tuple M = ⟨S, ι, Act, P, Z, obs⟩, where S is a finite set of states, ι ∈ Distr(S) is an initial distribution, Act is a finite set of actions, P : S × Act → Distr(S) is a partial transition function, Z is a finite set of observations, and obs : S → Distr(Z) is an observation function.
Remark 1. The observation function can also be defined as a state-action observation function obs : S × Act → Distr(Z). MDPs with state-action observation function can be easily transformed into equivalent MDPs with a state observation function using auxiliary states [19]. Throughout the paper we use state-action observations to keep (sensor) models concise.
We denote AvAct(s) = {α | P(s, α) ≠ ⊥}. W.l.o.g., |AvAct(s)| ≥ 1. If all distributions in M are Dirac, we refer to M as a Kripke structure (KS). If |AvAct(s)| = 1 for all s ∈ S, we refer to M as a Markov chain (MC). When Z = S, we refer to M as fully observable and omit Z and obs from its definition. A finite path in an MDP M is a sequence π = s0 a0 s1 . . . sn ∈ S × (Act × S)* such that ι(s0) > 0 and, for every 0 ≤ i < n, it holds that P(si, ai)(si+1) > 0. We denote the set of finite paths of M by Π_M. The length of a path is given by the number of actions along the path. The set Π^n_M for some n ∈ N denotes the set of finite paths of length n. We use π↓ to denote the last state in π. We omit M whenever it is clear from the context. A trace is a sequence of observations τ = z0 . . . zn ∈ Z+. Every path induces a distribution over traces. As standard, we need to resolve any nondeterminism by means of a scheduler.
We use Sched(M) to denote the set of schedulers. For a fixed scheduler σ ∈ Sched(M), the probability Pr^σ(π) of a path π (under the scheduler σ) is the product of the transition probabilities in the induced Markov chain. For more details, we refer the reader to [8].
Formal Problem Statement. Our goal is to determine the risk that the system is exposed to having observed a trace τ ∈ Z+. Let r : S → R≥0 map states in M to some risk in R≥0. We call r a state-risk function for M. This function assigns to every state the risk associated with being in that state. For example, in our experiments, we flexibly define the state risk using the (expected-reward extension of the) temporal logic PCTL [8], to define, e.g., the probability of reaching a fail state. For instance, we can define the risk as the probability to crash within H steps. The use of expected rewards allows for even more flexible definitions.
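As an illustration, a step-bounded state risk of this kind (the maximal probability of reaching a fail state within H steps) can be computed by value iteration over the MDP. The sketch below uses a hypothetical dict-based MDP encoding; the paper itself obtains state risks by model checking PCTL properties with Storm.

```python
def step_bounded_risk(trans, fail, horizon):
    """Max probability of reaching `fail` within `horizon` steps, per state.

    trans: state -> action -> {successor: probability} (toy encoding).
    """
    risk = {s: 1.0 if s in fail else 0.0 for s in trans}
    for _ in range(horizon):
        risk = {
            s: 1.0 if s in fail else max(
                sum(p * risk[t] for t, p in dist.items())
                for dist in trans[s].values()
            )
            for s in trans
        }
    return risk

# Toy model: a state that crashes with probability 1/2 per step.
trans = {
    "s0": {"wait": {"crash": 0.5, "s0": 0.5}},
    "crash": {"stay": {"crash": 1.0}},
}
risk = step_bounded_risk(trans, fail={"crash"}, horizon=2)
# risk["s0"] == 0.5 + 0.5 * 0.5 == 0.75
```

For H = 2, the risk of s0 is the probability of crashing in the first or second step, 0.75.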
Intuitively, to compute this risk we need to estimate the current system state having observed τ, taking the probabilistic and nondeterministic context into account. Towards this, we formalize the (conditional) probabilities and risks of paths and traces. Let Pr^σ(π | τ) denote the probability of a path π, under a scheduler σ, having observed τ. Since a scheduler may define many paths that induce the observation trace τ, we are interested in the weighted risk over all paths, i.e., Σ_{π∈Π^{|τ|}} Pr^σ(π | τ) · r(π↓). The monitoring problem for MDPs then conservatively over-approximates the risk of a trace by assuming an adversarial scheduler, that is, by taking the supremum risk estimate R_r(τ) over all schedulers and asking whether it exceeds a given threshold λ.
We call the special variant with λ = 0 the qualitative monitoring problem. The problems are (almost) equivalent on Kripke structures, where considering a single path to an adequate state suffices. Details are given in the appendix. In the next sections, we present two types of algorithms for the monitoring problem. The first is based on the widespread (forward) filtering approach [43]. The second is a new algorithm based on model checking conditional probabilities. While filtering approaches are efficacious in a purely nondeterministic or a purely probabilistic setting, they do not scale on models, such as MDPs, that are both probabilistic and nondeterministic. In those models, model checking provides a tractable alternative. However, we first connect the problem statement more formally to our motivating example.
An MDP defining the system dynamics. Before we solve the monitoring problem for MDPs, we show how the weighted risk for a system given by a world and sensor model can be formalized as a monitoring problem for MDPs. To this end, we define the dynamics of the world and sensors that we use as basis for our monitor as the following joint MDP.
For a fully observable world MDP E = ⟨S_E, ι_E, Act_E, P_E⟩ and a sensor MDP S = ⟨S_S, ι_S, S_E, P_S, Z, obs⟩, where obs is state-action based and the actions of the sensor are the states of the world, the inspected system is defined by the MDP ⟨E, S⟩ = ⟨S_J, ι_J, Act_E, P_J, Z, obs_J⟩, the synchronous composition of E and S: for every state ⟨u, s⟩ ∈ S_J and action α ∈ Act_E, the world moves according to P_E, and the sensor moves according to P_S, using the current world state as its action. In Figure 2, we illustrate a run of ⟨E, S⟩ for the world and sensor MDPs presented in Figure 1; in particular, we show the observations of the joint MDP.

Forward Filtering for State Estimation
We start by showing why standard forward filtering does not scale well on MDPs. We briefly show how filtering can be used to solve the monitoring problem for purely nondeterministic systems (Kripke structures) or purely probabilistic systems (Markov chains). Then, we show why for MDPs, forward filtering needs to manage a finite, but exponentially large, set of distributions. In Section 4, we present a new, improved variant of forward filtering for MDPs based on filtering with the vertices of the convex hull. In Section 5, we present a new polynomial-time model-checking-based algorithm for solving the problem.

State estimators for Kripke structures.
For Kripke structures, we maintain a set of possible states that agree with the observed trace. This set of states is inductively characterized by the function est_KS : Z+ → 2^S, which we define formally below. For an observation trace τ, est_KS(τ) defines the set of states that can be reached with positive probability while observing τ. This set can be computed by a forward state traversal [31]. To illustrate how est_KS(τ) is computed for τ, consider the underlying Kripke structure of the inspected system ⟨E, S⟩ for our running example in Figure 1 (to make this a Kripke structure, we remove the probabilities). Consider further the observation trace τ = Ro · Ro · Mo. Since ⟨E, S⟩ has only one initial state ⟨R, D2, sense⟩, and Ro is observable with positive probability in this state, est_KS(Ro) = {⟨R, D2, sense⟩}.

Definition 3 (KS state estimator). For KS = ⟨S, ι, Act, P, Z, obs⟩, the state estimation function est_KS : Z+ → 2^S is defined as

est_KS(z) = {s ∈ S | ι(s) > 0 and obs(s)(z) > 0}, and
est_KS(τ · z) = {s' ∈ S | ∃s ∈ est_KS(τ), ∃α ∈ AvAct(s) with P(s, α)(s') > 0 and obs(s')(z) > 0}.

For a Kripke structure KS and a given trace τ, the monitoring problem can be solved by computing est_KS(τ), using [31] and Lemma 1.
Lemma 1. For a Kripke structure KS = ⟨S, ι, Act, P, Z, obs⟩, a trace τ ∈ Z+, and a state-risk function r : S → R≥0, it holds that R_r(τ) = max_{s∈est_KS(τ)} r(s).

A more detailed proof can be found in the appendix.
The time and space requirements follow directly from the inductive definition of est_KS, which resembles a forward state traversal problem in automata [31]. In particular, the algorithm allows updating the result after extending τ in O(|P|).
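A minimal sketch of est_KS as a forward set traversal, under an assumed toy encoding (trans maps a state and action to its possible successors, obs maps a state to the set of observations it can emit with positive probability):

```python
def est_ks_init(init_states, obs, z):
    """est_KS(z): initial states that can emit the first observation z."""
    return {s for s in init_states if z in obs[s]}

def est_ks_step(est, trans, obs, z):
    """est_KS(tau . z): successors under any action that can emit z."""
    return {t
            for s in est
            for succs in trans[s].values()
            for t in succs
            if z in obs[t]}

# Toy Kripke structure with a nondeterministic branch at s0.
trans = {"s0": {"a": ["s1"], "b": ["s2"]},
         "s1": {"a": ["s0"]},
         "s2": {"a": ["s2"]}}
obs = {"s0": {"z0"}, "s1": {"z0", "z1"}, "s2": {"z1"}}

est = est_ks_init({"s0"}, obs, "z0")      # {"s0"}
est = est_ks_step(est, trans, obs, "z1")  # {"s1", "s2"}
```

The trace risk is then the maximum state risk over the resulting set.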

State estimators for Markov chains.
For Markov chains, in addition to tracking the potentially reachable system states, we also need to take the transition probabilities into account. When a system is (observation-)deterministic, we can adapt the notion of beliefs, similar to RVSE [52], and similar to the construction of belief MDPs for partially observable MDPs, cf. [51]:

Definition 4 (Belief). For an MDP M with a set of states S, a belief bel is a distribution in Distr(S).
In the remainder of the paper, we will denote the function S → {0} by 0 and the set Distr(S) ∪ {0} by Bel. A state estimator based on Bel is then defined as follows [49,55,52]:

Definition 5 (MC state estimator). For MC = ⟨S, ι, Act, P, Z, obs⟩ and a trace τ ∈ Z+, the state estimation function est_MC : Z+ → Bel is defined such that est_MC(z)(s) is proportional to ι(s) · obs(s)(z), and est_MC(τ · z)(s') is proportional to obs(s')(z) · Σ_{s∈S} est_MC(τ)(s) · P(s)(s'), where each belief is normalized to a distribution, and est_MC(τ) = 0 whenever the normalization constant is zero.

To illustrate how est_MC is computed, consider again our system in Figure 1 and a trace ending in the observation Lo. From the last two states in the belief support, when observing Lo, the states ⟨M, D0⟩ and ⟨L, D0⟩ can be reached with positive probability. Notice that although the state ⟨R, D0⟩ can be reached from ⟨R, D1⟩, the probability of being in this state is 0, since the probability of observing Lo in this state is obs(⟨R, D0⟩)(Lo) = 0.
Lemma 3. For a Markov chain MC = ⟨S, ι, Act, P, Z, obs⟩, a trace τ ∈ Z+, and a state-risk function r : S → R≥0, it holds that R_r(τ) = Σ_{s∈S} est_MC(τ)(s) · r(s). Computing R_r(τ) can be done in time O(|τ| · |S| · |P|), using |S| many rational numbers. The size of the rationals may grow linearly in τ.
Proof sketch. Since the system is deterministic, there is a unique scheduler, and the claim follows by induction over the length of τ. The complexity follows from the inductive definition of est_MC, which requires in each inductive step iterating over all transitions of the system and maintaining a belief over its states.
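One est_MC update step can be sketched as follows, under an assumed dict encoding (trans maps each state to its successor distribution, obs to its observation distribution): the belief is propagated, weighted by the observation likelihood, and normalized.

```python
def est_mc_step(bel, trans, obs, z):
    """One Bayesian filtering step for a Markov chain."""
    new = {}
    for s, p in bel.items():
        for t, q in trans[s].items():
            w = p * q * obs[t].get(z, 0.0)
            if w > 0.0:
                new[t] = new.get(t, 0.0) + w
    total = sum(new.values())
    if total == 0.0:
        return {}  # plays the role of the 0-belief: trace has probability zero
    return {t: w / total for t, w in new.items()}

# Toy Markov chain (single action per state, so trans is a distribution).
trans = {"s0": {"s1": 0.5, "s2": 0.5},
         "s1": {"s1": 1.0},
         "s2": {"s2": 1.0}}
obs = {"s0": {"z": 1.0}, "s1": {"z": 1.0}, "s2": {"z": 0.5, "y": 0.5}}

bel = est_mc_step({"s0": 1.0}, trans, obs, "z")
# unnormalized: s1 -> 0.5, s2 -> 0.25; normalized: s1 -> 2/3, s2 -> 1/3
```

The weighted risk Σ_s bel(s) · r(s) then follows directly from the resulting belief.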

State estimators for Markov decision processes.
In an MDP, we have to account for every possible resolution of the nondeterminism, which means that a belief can evolve into a set of beliefs:

Definition 6 (MDP state estimator). For an MDP M = ⟨S, ι, Act, P, Z, obs⟩, the state estimation function est_MDP : Z+ → 2^Bel is defined as

est_MDP(z) = {est_MC(z)} and est_MDP(τ · z) = ∪_{bel∈est_MDP(τ)} est_up_MDP(bel, z),

where bel' ∈ est_up_MDP(bel, z) if there exists ς_bel : S → Distr(Act) such that bel'(s') is proportional to obs(s')(z) · Σ_{s∈S} bel(s) · Σ_{α∈AvAct(s)} ς_bel(s)(α) · P(s, α)(s'), normalized to a distribution (and bel' = 0 if the normalization constant is zero).

The definition conservatively extends both Def. 3 and Def. 5. Furthermore, we remark that we do not restrict how the nondeterminism is resolved: any distribution over actions can be chosen, and the distributions may be different for different traces. These definitions can again be illustrated on our system in Figure 1.

Theorem 1. For an MDP M = ⟨S, ι, Act, P, Z, obs⟩, a trace τ ∈ Z+, and a state-risk function r : S → R≥0, it holds that R_r(τ) = sup_{bel∈est_MDP(τ)} Σ_{s∈S} bel(s) · r(s).
Proof sketch. For a given trace τ, each (history-dependent, randomizing) scheduler induces a belief over the states of the Markov chain induced by the scheduler. Conversely, each belief in est_MDP(τ) corresponds to a fixed scheduler, namely the one used to compute the belief recursively (i.e., an arbitrary randomizing memoryless scheduler for every time step). Once a scheduler σ and its corresponding belief bel are fixed, or vice versa, we can show by induction over the length of τ that Σ_{π∈Π^{|τ|}} Pr^σ(π | τ) · r(π↓) = Σ_{s∈S} bel(s) · r(s).
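One est_up_MDP step can be sketched by enumerating, for every tracked belief, one action per support state, i.e., restricting ς_bel to deterministic choices (which, as discussed in the next section, suffices to generate all vertex candidates). The dict encoding below is an illustrative assumption; note the blow-up: every belief can spawn exponentially many successors.

```python
from itertools import product

def est_mdp_step(beliefs, trans, obs, z):
    """Update a set of beliefs under every deterministic action choice."""
    out = {}
    for bel in beliefs:
        support = sorted(bel)
        # one action per state in the support of the belief
        for choice in product(*(sorted(trans[s]) for s in support)):
            new = {}
            for s, a in zip(support, choice):
                for t, q in trans[s][a].items():
                    w = bel[s] * q * obs[t].get(z, 0.0)
                    if w > 0.0:
                        new[t] = new.get(t, 0.0) + w
            total = sum(new.values())
            if total > 0.0:
                # normalize and deduplicate (rounding as a hash key)
                key = tuple(sorted((t, round(w / total, 12))
                                   for t, w in new.items()))
                out[key] = dict(key)
    return list(out.values())

# A single nondeterministic choice already splits the belief in two.
trans = {"s0": {"a": {"s1": 1.0}, "b": {"s2": 1.0}},
         "s1": {"a": {"s1": 1.0}},
         "s2": {"a": {"s2": 1.0}}}
obs = {s: {"z": 1.0} for s in trans}

beliefs = est_mdp_step([{"s0": 1.0}], trans, obs, "z")
# two successor beliefs: {"s1": 1.0} and {"s2": 1.0}
```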

Convex Hull-based Forward Filtering
In this section, we show that we can use a finite representation for est MDP (τ ), but that this representation is exponentially large for some MDPs.

Properties of est MDP (τ ).
First, observe that 0 never maximises the risk. Furthermore, est_up_MDP(0, z) = {0}. We can thus w.l.o.g. assume that 0 ∉ est_MDP(τ). Second, we can interpret a belief bel ∈ Bel as a point in (a bounded subset of) R^{|S|−1}. We are in particular interested in convex sets of beliefs. A set B ⊆ Bel is convex if it is closed under convex combinations of its elements; we denote by V(B) the vertices of the convex hull of B, i.e., those beliefs in B that are not convex combinations of the others.

Example 1. Consider Fig. 3(a). All observations are Dirac, and only states s2 and s4 have observation z1. The beliefs having observed z0 z0 are distributions over s1, s3, and can thus be depicted in a one-dimensional simplex. In particular, we have V(est_MDP(z0 z0)) = {{s1 → 1}, {s1 → 3/4, s3 → 1/4}}, as depicted in Fig. 3(b). The six beliefs having observed z0 z0 z0 are distributions over s0, s1, s3, depicted in Fig. 3(c). Five out of six beliefs are vertices. The belief having observed z0 z0 z1 is shown in Fig. 3(d).
Remark 2. Observe that we illustrate the beliefs over only the states in est_KS(τ). We therefore call |est_KS(τ)| the dimension of est_MDP(τ).
From the fundamental theorem of linear programming [46, Ch. 7], it immediately follows that the trace risk R_r(τ) is obtained at a vertex of the beliefs of est_MDP(τ). We obtain the following refinement of Theorem 1:

Theorem 2. For every τ and r: R_r(τ) = max_{bel∈V(est_MDP(τ))} Σ_{s∈S} bel(s) · r(s).

A monitor thus only needs to track the vertices. Furthermore, est_up_MDP(B, z) can be adapted to compute only vertices by limiting ς_bel to functions S → Act.
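The intuition behind Theorem 2 is that the weighted risk is linear in the belief, so its maximum over a convex set is attained at a vertex. A small numeric check using the two vertex beliefs from Example 1 (the state-risk values here are made up for illustration):

```python
def weighted_risk(bel, r):
    """Weighted state risk of a single belief: sum of bel(s) * r(s)."""
    return sum(p * r[s] for s, p in bel.items())

# The two vertices of est_MDP(z0 z0) from Example 1:
b1 = {"s1": 1.0}
b2 = {"s1": 0.75, "s3": 0.25}
r = {"s1": 0.2, "s3": 1.0}  # illustrative state risks

# Any convex combination of b1 and b2 is dominated by one of the two:
mid = {s: 0.5 * b1.get(s, 0.0) + 0.5 * b2.get(s, 0.0) for s in ("s1", "s3")}
assert weighted_risk(mid, r) <= max(weighted_risk(b1, r), weighted_risk(b2, r))

# Hence the trace risk is the maximum over the vertices only:
trace_risk = max(weighted_risk(b, r) for b in (b1, b2))  # 0.4, attained at b2
```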

Exponential lower bounds on the relevant vertices.
We show that a monitor in general cannot avoid an exponential blow-up in the beliefs it tracks. First, observe that updating bel yields up to Π_s |AvAct(s)| new beliefs (vertex or not), a prohibitively large number. The number of vertices is also exponential:

Lemma 6. There exists a family of MDPs M_n with 2n + 1 states such that |V(est_MDP(τ))| = 2^n for every τ with |τ| > 2.
Proof sketch. We construct M_n for n = 3, that is, M_3 in Fig. 4(a). For this MDP and τ = AAA, |V(est_MDP(τ))| = 2^3. In particular, observe how the belief factorizes into a belief within each component C_i = {h_i, l_i}, and notice that M_n has components C_1 to C_n. For each component, the belief places probability mass 1/n (for n = 3, 1/3) either in the 'low' state l_i or in the 'high' state h_i. We depict the beliefs in Fig. 4(b,c,d). Thus, for any τ with |τ| > 2, we can compactly represent V(est_MDP(τ)) as bit-strings of length n. Concretely, the belief {h_1, l_2, l_3 → 1/3; l_1, h_2, h_3 → 0} maps to 100, and {h_1, l_2, h_3 → 1/3; l_1, h_2, l_3 → 0} maps to 101.
These are exponentially many beliefs for bit strings of length n.
One might ask whether a symbolic encoding of an exponentially large set may result in a more tractable approach to filtering. While Theorem 2 allows computing the associated risk from a set of linear constraints with standard techniques, it is not clear whether such a concise set of constraints can be efficiently constructed and updated in every step. We leave this concern for future work.
In the remainder, we investigate whether we need to track all these beliefs. First, when the monitor is unaware of the state-risk, this is trivially unavoidable. More precisely, every vertex may induce the maximal weighted trace risk under an appropriately chosen state-risk:

Lemma 7. For every bel ∈ V(est_MDP(τ)) there exists a state-risk function r such that Σ_{s∈S} bel(s) · r(s) > Σ_{s∈S} bel'(s) · r(s) for all bel' ∈ est_MDP(τ) \ {bel}.

Proof sketch. We construct r such that r(s) > r(s') if bel(s) > bel(s').

Second, even if the monitor is aware of the state risk r, it may not be able to prune enough vertices to avoid exponential growth. The crux here is that while some of the current beliefs may induce a smaller risk, an extension of the trace may cause such a belief to evolve into a belief that induces the maximal risk.
Theorem 3. There exist MDPs M_n, a trace τ with B := V(est_MDP(τ)), and a state-risk r such that |B| = 2^n and for all bel ∈ B there exists τ' ∈ Z+ with R_r(τ · τ') > sup_{bel'∈B'} Σ_s bel'(s) · r(s), where B' = est_up_MDP(B \ {bel}, τ').

It is helpful to understand this theorem as describing the outcome of a game between monitor and environment: the statement says that if the monitor decides to drop some vertex from est_MDP(τ), the environment may produce an observation trace τ' that leads the monitor to underestimate the weighted risk R_r(τ · τ').
Proof sketch. We extend the construction of Fig. 4(a) with choices to go to a final state. The full proof sketch can be found in Appendix C.

Approximation by pruning
Finally, we illustrate that we cannot simply prune small probabilities from beliefs. This indicates that an approximative version of filtering for the monitoring problem is nontrivial. Reconsider observing z0 z0 in the MDP of Fig. 3 and, for the sake of argument, let us prune the (small) entry s3 → 1/4 to 0. Now, continuing with the trace z0 z0 z1, we would update the beliefs from before and then conclude that this trace cannot be observed with positive probability. With pruning, there is thus no upper bound on the difference between the computed and the actual R_r(τ). In summary, forward filtering is, in general, not tractable on MDPs.

Unrolling with Model Checking
We present a tractable algorithm for the monitoring problem. Contrary to filtering, this method incorporates the state risk. We briefly consider the qualitative case first. A nondeterministic algorithm that solves this problem iteratively guesses a successor such that the given trace has positive probability and reaches a state with sufficient risk. The algorithm only stores the current and next state and a counter. This result implies the existence of a polynomial-time algorithm, e.g., using a graph search on a graph growing in |τ|. There also is a deterministic algorithm with space complexity O(log²(|M| + |τ|)), which follows from applying Savitch's theorem [45], but that algorithm has exponential time complexity.
We now present a tractable algorithm for the quantitative case, where we need to store all paths. We do this efficiently by storing an unrolled MDP with these paths using ideas from [9,19]. In particular, on this MDP, we can efficiently obtain the scheduler that optimizes the risk by model checking rather than enumerating over all schedulers explicitly. We give the result before going into details.
The problem is P-hard, as unary-encoded step-bounded reachability is P-hard [40]. It remains to give a polynomial-time algorithm, which is outlined below. Roughly, the algorithm constructs an MDP M''' from M in three conceptual steps, such that the maximal probability of reaching a dedicated goal state in M''' coincides with R_r(τ). This reachability probability can be computed by linear programming in polynomial time. The downside is that, even in the best case, the memory consumption grows linearly in |τ|.
We outline the main steps of the algorithm and exemplify them below. First, we transform M into an MDP M' with deterministic state observations, i.e., with obs : S → Z. This construction is detailed in [19, Remark 1] and runs in polynomial time. The new initial distribution takes into account the initial observation and the initial distribution. Importantly, for each path π and each trace τ, obs_tr(π)(τ) is preserved. From here, the idea for the algorithm is a tailored adaptation of the construction for conditional reachability probabilities in [9]. We ensure that r(s) ∈ [0, 1] by scaling r and λ accordingly. Second, we construct a new MDP M'' by unrolling M' along the |τ| observations of the trace and adding goal and sink states, reached from the final unrolled states with probabilities determined by the (scaled) state risk. Observe that the risk is now the supremum of the probabilities, over paths that reach the goal state, conditioned by the trace τ. The MDP M'' is only polynomially larger. Third, we construct the MDP M''' by copying M'' and replacing (part of) the transition relation P by P''' such that paths π that are inconsistent with τ are looped back to the initial state (resembling rejection sampling). The maximal conditional reachability probability in M'' equals the maximal reachability probability in M''' [9]. Maximal reachability probabilities can be computed by solving a linear program [42], and can thus be computed in polynomial time.
Example 2. We illustrate the construction in Fig. 5. In Fig. 5(a), we depict an MDP M with ι = {s0, s1 → 1/2}. Furthermore, let τ = z0 z0, and let r(s0) = 1 and r(s1) = 2. Let obs(s0) = {z0 → 1} and obs(s1) = {z0 → 1/4, z1 → 3/4}. State s1 has two possible observations, so we split s1 into s1 and s2 in the MDP M', each with its own observation. Any transition into s1 is now split. As |τ| = 2, we unroll the MDP M' into the MDP M'' representing two steps, and add goal and sink states. After rescaling, we obtain r(s0) = 1/2, whereas r(s1) = r(s2) = 2/2 = 1, and we add the appropriate outgoing transitions to the unrolled states. In a final step, we create the MDP M''' from M'': we reroute all probability mass that does not agree with the observations to the initial states. Now, R_r(z0 z0) is given by the probability to reach the goal state in M''' in an unbounded number of steps.
The construction also implies that maximizing over a finite set of schedulers suffices, namely the deterministic schedulers with a counter from 0 to |τ|. We denote this class Σ_DC(|τ|). Formally, a scheduler σ is in Σ_DC(k) if σ is deterministic and for all paths π, π' with π↓ = π'↓ and |π| = |π'| ≤ k, we have σ(π) = σ(π').

Lemma 8. For every τ, it holds that R_r(τ) = max_{σ∈Σ_DC(|τ|)} Σ_{π∈Π^{|τ|}} Pr^σ(π | τ) · r(π↓).

The crucial idea underpinning this lemma is that memoryless schedulers suffice for the unrolling, and that the states of the unrolling can be uniquely mapped to a state of M and the length of the history. By a reduction from step-bounded reachability, we can also show that this class of schedulers is necessary [4].
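The finiteness of this scheduler class suggests a naive baseline: enumerate all deterministic step-counting schedulers (one action per state and step) and take the best conditioned weighted risk. This is exponential in |τ| and only serves to illustrate the semantics on toy models under an assumed dict encoding; the polynomial algorithm above instead model-checks the unrolled MDP.

```python
from itertools import product

def risk_under(sched, trans, obs, init, r, trace):
    """Filter with unnormalized weights under one fixed scheduler, then
    condition on the trace; None if the trace has probability zero."""
    w = {s: p * obs[s].get(trace[0], 0.0) for s, p in init.items()}
    for k, z in enumerate(trace[1:]):
        new = {}
        for s, p in w.items():
            if p > 0.0:
                for t, q in trans[s][sched[s, k]].items():
                    new[t] = new.get(t, 0.0) + p * q * obs[t].get(z, 0.0)
        w = new
    total = sum(w.values())
    return None if total == 0.0 else sum(p * r[s] for s, p in w.items()) / total

def trace_risk(trans, obs, init, r, trace):
    """Supremum over all deterministic step-counting schedulers."""
    points = [(s, k) for k in range(len(trace) - 1) for s in trans]
    values = []
    for choice in product(*(sorted(trans[s]) for s, _ in points)):
        v = risk_under(dict(zip(points, choice)), trans, obs, init, r, trace)
        if v is not None:
            values.append(v)
    return max(values)

# Toy model: the adversary decides at step 0 where the system goes.
trans = {"s0": {"a": {"s1": 1.0}, "b": {"s2": 1.0}},
         "s1": {"a": {"s1": 1.0}},
         "s2": {"a": {"s2": 1.0}}}
obs = {s: {"z": 1.0} for s in trans}
r = {"s0": 0.0, "s1": 1.0, "s2": 2.0}

best = trace_risk(trans, obs, {"s0": 1.0}, r, "zz")
# the adversarial choice at step 0 is action "b", giving risk 2.0
```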

Empirical Evaluation
Implementation. We provide prototype implementations for both the filtering-based and the model-checking-based approaches from Secs. 4 and 5, built on top of the probabilistic model checker Storm [30]. We provide a schematic setup of our implementation in Fig. 6. As input, we consider a symbolic description of MDPs with state-based observation labels, based on an extended dialect of the Prism language. We define the state risk in this MDP via a temporal property (given as a PCTL formula) and obtain the concrete state risk by model checking. We take a seed that yields a trace using the simulator; for the experiments, actions are resolved uniformly in this simulator. The simulator iteratively feeds observations into the monitor, running either of our two algorithms (implemented in C++). After each observation z_i, the monitor computes the risk R_i having observed z_0 . . . z_i. We flexibly combine these components via a Python API.
For filtering as in Sec. 4, we provide a sparse data structure for beliefs that is updated using only deterministic schedulers; this is sufficient, see Lemma 4. To further prune the set of beliefs, we implement an SMT-driven elimination [47] of interior beliefs, i.e., beliefs inside the convex hull. We construct the unrolling as described in Sec. 5 and apply model checking via the sparse engines in Storm.
Set-up. For each benchmark described below, we sampled 50 random traces using seeds 0–49 of lengths up to |τ| = 500. We are interested in the promptness, that is, the time between receiving an observation z_i and returning the corresponding risk R_i, as well as the cumulative performance obtained by summing the promptness along the trace. We use a timeout of 1 second for this query. We compare the forward filtering (FF) approach with and without convex hull (CH) reduction, and the model unrolling approach (UNR) with two model checking engines of Storm: exact policy iteration (EPI, [42]) and optimistic value iteration (OVI, [28]). All experiments are run on a MacBook Pro MV962LL/A, using a single core. The memory limit of 6 GB was never violated. We use Z3 [37] as the SMT solver [11] for the convex hull reduction.
Benchmarks. We present three benchmark families, all MDPs with a combination of probabilities, nondeterminism and partial observability.
Airport-A is as in Sec. 1, but with a higher resolution for both the ground vehicle in the middle lane and the plane. Airport-B has a two-state sensor model with stochastic transitions between the two states.
Refuel-A models robots with a depleting battery and recharging stations. The world model consists of a robot moving around in a D×D grid with some dedicated charging cells, where each action costs energy. The risk is to deplete the battery within a fixed horizon. Refuel-B is a two-state sensor variant.
Evade-I is inspired by a navigation task in a multi-agent setting in a D×D grid. The monitored robot moves randomly, and the risk is defined as the probability of crashing with the other robot. The other robot has an internal incentive in the form of a cardinal direction, and nondeterministically decides to move or to uniformly randomly change its incentive. The monitor observes everything except the incentive of the other robot. Evade-V is an alternative navigation task: contrary to the above, the other robot does not have an internal state and indeed navigates nondeterministically in one of the cardinal directions. We only observe the other robot's location when it is within the view range.
Results. We split our results over two tables. In Table 1, we give an ID for every benchmark name and instance, along with the size of the MDP (number of states |S| and transitions |P|) our algorithms operate on, and consider the promptness after prefixes of length |τ|. In particular, for forward filtering with the convex hull optimization, we give the number N of traces that did not time out before, and report the average (T_avg) and maximal (T_max) time needed, over all sampled traces that did not time out before. Furthermore, we give the average (B_avg) and maximal (B_max) number of beliefs stored (after reduction), and the average (D_avg) and maximal (D_max) dimension of the belief support. Likewise, for unrolling with exact model checking, we give the number N of traces that did not time out before, the average (T_avg) and maximal (T_max) time, as well as the average and maximal number of states of the unrolled MDP.

Table 1. Performance for promptness of online monitoring on various benchmarks.

In Table 2, we consider the cumulative performance for the benchmarks above. In particular, this table also considers an alternative implementation for both FF and UNR. We use the IDs to identify the instance, and sum the time over each prefix of length |τ|. For filtering, we recall the number N of traces that did not time out, and report the average and maximal cumulative time along the trace, the average cumulative number of beliefs that were considered, and the average cumulative number of beliefs eliminated. For the case without convex hull, no vertices are eliminated. For unrolling, we report the average (T_avg) and maximal cumulative time using EPI, as well as the time required for model building, Bld % (relative to the total time, per trace). We compare this to the average and maximal cumulative time using OVI (building times remain approximately the same).
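To make the unrolling idea concrete, here is a minimal Python sketch that computes the maximum risk estimate over all states reachable by paths consistent with an observation trace, on a toy MDP with a deterministic sensor. The dictionary encoding and all names are assumptions for illustration; the actual UNR implementation builds the unrolled MDP and model-checks conditional reachability probabilities with Storm:

```python
def max_risk_unrolled(P, obs, risk, init, trace):
    """Maximum risk estimate via unrolling: track, layer by layer, the
    support of states reachable by some path consistent with the trace.
    P[s][a]: list of (successor, probability) pairs; obs[s]: the (here
    deterministic) observation of state s; risk[s]: state risk.
    trace[0] is the observation of the initial state."""
    layer = {s for s in init if obs[s] == trace[0]}
    for z in trace[1:]:
        nxt = set()
        for s in layer:
            for a in P[s]:  # nondeterminism: any action may be chosen
                for t, p in P[s][a]:
                    if p > 0 and obs[t] == z:
                        nxt.add(t)
        layer = nxt
    return max((risk[s] for s in layer), default=None)  # None: trace impossible
```

Note that each layer of the unrolling corresponds to one prefix length, which is why the unrolled MDP, and hence the model checking time, grows with |τ|.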
Discussion. The results from our prototype show that conservative (sound) predictive monitoring of systems that combine probabilities, nondeterminism and partial observability is within reach with the methods we proposed and state-of-the-art algorithms. Both forward filtering and the unrolling-based approach have their merits. The practical results thus slightly diverge from the complexity results in Sec. 3.1, due to structural properties of some benchmarks. In particular, for airport-A and refuel-A, the nondeterminism barely influences the belief, so there is no explosion, and consequently the dimension of the belief is small enough that the convex hull can be computed efficiently. Rather than the number of states, it is this belief dimension that makes evade-V a difficult benchmark. If many states can be reached with a particular trace, and if along these paths there are some probabilistic states, forward filtering suffers significantly. We also see that when a benchmark allows for efficacious forward filtering, longer traces do not slow it down the way they slow down unrolling. For UNR, we observe that OVI is typically the fastest, but EPI does not suffer from the numerical worst cases that OVI does. If an observation trace is unlikely, the unrolled MDP constitutes a numerically challenging problem, in particular for value-iteration-based model checkers, see [27]. For FF, the convex hull computation is essential in any dimension, and eliminating some vertices in every step keeps the number of belief states manageable.
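The forward-filtering step discussed above can be sketched as follows, assuming a simple dictionary encoding of the MDP (all names are hypothetical). The sketch only deduplicates exact copies of belief vertices; the convex hull reduction of the paper additionally eliminates every vertex that is a convex combination of the others, via an SMT query:

```python
def ff_step(beliefs, z, P, obs_prob, actions):
    """One forward-filtering step under partial observability.
    beliefs: set of belief vertices (tuples of state probabilities);
    P[s][a]: list of (successor, probability) pairs;
    obs_prob[t][z]: likelihood of observing z in state t.
    Every (vertex, action) pair may spawn a new vertex -- this is the
    source of the exponential blowup discussed in the paper."""
    n = len(next(iter(beliefs)))
    out = set()
    for b in beliefs:
        for a in actions:  # each action resolves the nondeterminism one way
            post = [0.0] * n
            for s, mass in enumerate(b):
                if mass == 0.0:
                    continue
                for t, p in P[s][a]:
                    post[t] += mass * p * obs_prob[t].get(z, 0.0)
            norm = sum(post)
            if norm > 0.0:  # otherwise z is impossible under this action
                out.add(tuple(round(x / norm, 12) for x in post))
    # A convex hull (CH) reduction would now drop every vertex that is a
    # convex combination of the remaining ones (the paper checks this with
    # Z3); here we only deduplicate exact copies.
    return out
```

Iterating this step along a trace yields the set of belief vertices from which the risk estimate is computed, which is why bounding the number of stored vertices (B_avg, B_max in Table 1) is crucial.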

Related work
We are not the first to consider model-based runtime verification in the presence of partial observability and probabilities. Runtime verification with state estimation on hidden Markov models (HMMs), i.e., without nondeterminism, has been studied for various types of properties [49,55,52] and has been extended to hybrid systems [50]. The tool Prevent focuses on black-box systems by learning an HMM from a set of traces. The HMM approximates the actual system (with only convergence-in-the-limit guarantees) [6]; during runtime, it then estimates the most likely trace rather than a distribution over current states.
Extensions consider symmetry reductions on the models [7]. These techniques do not provide a conservative (sound) risk estimation. The recent framework for runtime verification in the presence of partial observability [23] takes a stricter black-box view and cannot provide state estimates. Finally, [26] chooses partial observability to make monitoring of software systems more efficient, and [56] monitors a noisy sensor to reduce energy consumption.
State beliefs are studied when verifying HMMs [57], where the question is whether a sequence of observations is likely to occur, or which HMM is an adequate representation of a system [36]. State beliefs are prominent in the verification of partially observable MDPs [39,32,16], where one can observe the actions taken (but the problem itself is to find the right scheduler). Our monitoring problem can be phrased as a special case of the verification of partially observable stochastic games [20], but automatic techniques for those very general models are lacking. Likewise, the idea of shielding is to (pre)compute all action choices that lead to safe behavior [15,3,24,35,5,34]. For partially observable settings, shielding again requires computing partial-information schedulers [38,21], contrary to our approach. Partial observability has also been studied in the context of diagnosability, asking whether a fault has occurred (in the past) [14], or which actions uncover faults [13]. We instead assume partial observability under which we do detect faults, but we want to estimate the risk that such faults occur in the future.
The assurance framework for reinforcement learning [41] implicitly allows for stochastic behavior, but cannot cope with partial observability or nondeterminism. Predictive monitoring has been combined with deep learning [17] and Bayesian inference [22], where the key problem is that computing the probability of an imminent failure exactly is too expensive. More generally, learning automata models has been motivated by runtime assurance [1,53]. Testing approaches statistically evaluate whether traces are likely to be produced by a given model [25]. The approach in [2] studies stochastic black-box systems with controllable nondeterminism and iteratively learns a model of the system.

Conclusion
We have presented the first framework for monitoring based on a trace of observations on models that combine nondeterminism and probabilities. Future work includes heuristics for approximate monitoring, faster convex hull computations, and applying this work to grey-box (learned) models.

A On qualitative variants of the problem
We may reformulate the qualitative problem as follows. First, define the risk as the maximum over reached states, i.e.,

R^max_r(τ) := max { r(π↓) | π ∈ Π^{|τ|}_M with obs_tr(π)(τ) > 0 }.

This maximum risk estimation conservatively states the highest risk that can be achieved after an observation trace. The quantitative (qualitative) maximum risk estimation problem, analogously, is to decide R^max_r(τ) > λ with λ ≥ 0 (λ = 0, respectively).
Lemma 9. The qualitative risk estimation problem and the quantitative and qualitative maximum risk estimation problem are all logspace interreducible.
Proof (Lemma 9). The qualitative case is a special case of the quantitative case. Now consider the quantitative case, and assume some fixed λ > 0. Observe that, for any fixed scheduler σ, Σ_π Pr_σ(π | τ) = 1, and that there is a unique path π such that Pr_σ(π | τ) = 1. We can reduce the quantitative maximum risk estimation problem to a qualitative monitoring problem as follows. We modify the state risk function by checking state-wise whether the risk at a state is above λ: if it is, we set the state risk to 1, and to 0 otherwise. If R_r(τ) > λ for some λ ∈ R≥0, then there is a path π ∈ Π^{|τ|}_M with r(π↓) > λ. This in turn means that Pr_σ(π | τ) · r(π↓) > 0. Conversely, if the sum over all paths is zero, then there is no such path.
- The qualitative weighted risk estimation problem and the qualitative maximum risk estimation problem coincide. In the qualitative case, R_r(τ) > 0 holds when at least one path π matches the trace τ with positive probability and r(π↓) > 0. In this case, R^max_r(τ) > 0 also holds, since r(π↓) > 0 and obs_tr(π)(τ) > 0. Conversely, when R^max_r(τ) > 0 holds, there is a path π such that r(π↓) > 0 and obs_tr(π)(τ) > 0. This implies that Pr_σ(π | τ) > 0 and, in turn, that R_r(τ) > 0.
- The qualitative weighted risk estimation problem and the quantitative maximum risk estimation problem are logspace interreducible. The reduction from the qualitative weighted risk estimation problem to the quantitative maximum risk estimation problem is trivial and follows from the previous item, since the qualitative maximum risk estimation problem is an instance of the quantitative maximum risk estimation problem.
For the other direction, we reduce the quantitative maximum risk estimation problem to a qualitative weighted risk estimation problem as follows. We modify the state risk function by checking state-wise whether the risk at a state is above λ: if it is, we set the state risk to 1, and to 0 otherwise. If R^max_r(τ) > λ for some λ ∈ R≥0, then there is a path π ∈ Π^{|τ|}_M with r(π↓) > λ, which in turn means that, for the modified risk function, Pr_σ(π | τ) · r(π↓) > 0. Furthermore, if R^max_r(τ) ≤ λ, then for all π ∈ Π^{|τ|}_M it holds that Pr_σ(π | τ) · r(π↓) = 0. Both reductions can be performed in logspace: in the first direction, we only need to fix λ, and in the second direction, we merely iterate over the states and change the risk function accordingly.
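The state-wise thresholding used in this reduction can be illustrated by a small Python sketch; the encoding of the risk function as a dictionary is an assumption for illustration:

```python
def threshold_risk(risk, lam):
    """State-wise thresholding: reduce the quantitative maximum risk
    estimation problem (is R^max_r(tau) > lam?) to a qualitative one
    (is R^max_{r'}(tau) > 0?) by mapping every state risk above lam to 1
    and every other state risk to 0."""
    return {s: (1 if r > lam else 0) for s, r in risk.items()}
```

For any set of states reachable under a trace, the maximum of the original risks exceeds λ exactly when the maximum of the thresholded risks is positive, which is the correctness argument above.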

C Proof of Thm 3
We extend the construction of Fig. 4(a) to obtain the MDP M_3 in Fig. 7. By taking an action H_i or L_i, the probability mass leaves both the low and the high state of component C_i. In all other components, the probability mass remains. We can represent beliefs with extended bit-strings; concretely, we can think of {h_1, l_3 ↦ 1/3, l_1, l_2, h_2, h_3, ⊥ ↦ 0} as 10, and of {⊥ ↦ 2/3, l_3 ↦ 1/3, l_1, l_2, h_1, h_2, h_3 ↦ 0} as 0. We define the risk as 1 at the dedicated target state and 0 at all other states; thus, the belief that puts all mass on the target state has maximal risk. Now consider B := V(est_MDP(AAA)) = B_n, and fix bel := 100. Then est_up_MDP(B \ {bel}, H_1 L_2 L_3) does not contain the successor belief of bel, but V(est_MDP(AAA H_1 L_2 L_3)) does. Thus, the estimated risk after AAA H_1 L_2 L_3 will be below the maximum if we remove bel after seeing AAA. Symmetrically, we can show that all other beliefs need to remain in B.