Sound Value Iteration
Abstract
Computing reachability probabilities is at the heart of probabilistic model checking. All model checkers compute these probabilities in an iterative fashion using value iteration. This technique approximates a fixed point from below by determining reachability probabilities for an increasing number of steps. To avoid results that are significantly off, variants have recently been proposed that converge from both below and above. These procedures require starting values for both sides. We present an alternative that does not require the a priori computation of starting vectors and that converges faster on many benchmarks. The crux of our technique is to give tight and safe bounds—whose computation is cheap—on the reachability probabilities. Lifting this technique to expected rewards is trivial for both Markov chains and MDPs. Experimental results on a large set of benchmarks show its scalability and efficiency.
1 Introduction
Markov decision processes (MDPs) [1, 2] have their roots in operations research and stochastic control theory. They are frequently used for stochastic and dynamic optimization problems and are widely applicable in, e.g., stochastic scheduling and robotics. MDPs are also a natural model in randomized distributed computing where coin flips by the individual processes are mixed with nondeterminism arising from interleaving the processes’ behaviors. The central problem for MDPs is to find a policy that determines what action to take in the light of what is known about the system at the time of choice. The typical aim is to optimize a given objective, such as minimizing the expected cost until a given number of repairs, maximizing the probability of being operational for 1,000 steps, or minimizing the probability to reach a “bad” state.
Probabilistic model checking [3, 4] provides a scalable alternative to tackle these MDP problems, see the recent surveys [5, 6]. The central computational issue in MDP model checking is to solve a system of linear inequalities. In absence of nondeterminism—the MDP being a Markov chain (MC)—a linear equation system is obtained. After appropriate precomputations, such as determining the states for which no policy exists that eventually reaches the goal state, the (in)equation system has a unique solution that coincides with the extremal value that is sought for. Possible solution techniques to compute such solutions include policy iteration, linear programming, and value iteration. Modern probabilistic model checkers such as PRISM [7] and Storm [8] use value iteration by default. This approximates a fixed point from below by determining the probabilities to reach a target state within k steps in the kth iteration. The iteration is typically stopped once the difference between the value vectors of two successive iterations (or of iterations that lie further apart) is below the desired accuracy \({\varepsilon }\).
This procedure can, however, provide results that are significantly off, as the iteration may be stopped prematurely, e.g., when the probability mass in the MDP changes only slightly in a series of computation steps due to a “slow” movement. This problem is not new; similar problems occur, e.g., in iterative approaches to compute long-run averages [9] and transient measures [10], and pop up in statistical model checking when deciding when to stop simulating for unbounded reachability properties [11]. As was recently shown, this phenomenon does not only occur in hypothetical cases but affects practical benchmarks of MDP model checking too [12]. To remedy this, Haddad and Monmege [13] proposed to iteratively approximate the (unique) fixed point from both below and above; a natural termination criterion is to halt the computation once the two approximations differ by less than \(2{\cdot }{\varepsilon }\). This scheme requires two starting vectors, one for each approximation. For reachability probabilities, the conservative values zero and one can be used. For expected rewards, it is nontrivial to find an appropriate upper bound: how to “guess” an adequate upper bound on the expected reward to reach a goal state? Baier et al. [12] recently provided an algorithm to solve this issue.
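To make the premature-stopping issue concrete, consider a hypothetical one-state chain (our own illustration, not one of the benchmarks discussed later) that loops with probability 0.99 and reaches the goal with probability 0.01. The true reachability probability is 1, yet value iteration with the usual stopping criterion halts far from it:

```python
def standard_vi(p_loop, p_goal, eps):
    """Value iteration on the single unknown x = p_goal + p_loop * x."""
    x, iterations = 0.0, 0
    while True:
        x_new = p_goal + p_loop * x
        iterations += 1
        if x_new - x < eps:   # naive criterion: successive values are close
            return x_new, iterations
        x = x_new

result, k = standard_vi(0.99, 0.01, 1e-6)
# The true value is 1.0, but the returned result is off by roughly 1e-4,
# i.e. about 100 times the requested precision eps = 1e-6.
print(result, k, 1.0 - result)
```

Each iteration adds 0.01 of the remaining probability mass, so successive values get close long before the value itself is close to 1.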
The key idea of our approach is to split the reachability probability into:

 (i)
the probability for reaching a target state within k steps and

 (ii)
the probability for reaching a target state only after k steps.
We obtain (i) via k iterations of (standard) value iteration. A second instance of value iteration computes the probability that a target state is still reachable after k steps. We show that safe lower and upper bounds for (ii) can be derived from this information. We illustrate that the same idea can be applied to expected rewards, topological value iteration [14], and Gauss-Seidel value iteration. We also discuss in detail its extension to MDPs and provide an extensive experimental evaluation using our implementation in the model checker Storm [8]. Our experiments show that on many practical benchmarks we need significantly fewer iterations, yielding a speedup of about 20% on average. More important, though, is the conceptual simplicity of our approach.
2 Preliminaries
For a finite set S and vector \(x \in \mathbb {R}^{S}\), let \({x[s]} \in \mathbb {R}\) denote the entry of x that corresponds to \(s \in S\). Let \(S' \subseteq S\) and \(a \in \mathbb {R}\). We write \({x[S']} = a\) to denote that \({x[s]} = a\) for all \(s \in S'\). Given \(x,y \in \mathbb {R}^{S}\), \(x \le y\) holds iff \({x[s]} \le {y[s]}\) holds for all \(s \in S\). For a function \(f :\mathbb {R}^{S} \rightarrow \mathbb {R}^{S}\) and \(k \ge 0\) we write \(f^k\) for the function obtained by applying f k times, i.e., \(f^0(x) = x\) and \(f^k(x) = f(f^{k-1}(x))\) if \(k>0\).
2.1 Probabilistic Models and Measures
We briefly present probabilistic models and their properties. More details can be found in, e.g., [15].
Definition 1
(MDP). A Markov decision process (MDP) is a tuple \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\), where

– S is a finite set of states, \( Act \) is a finite set of actions, \({s_{I}}\in S\) is the initial state,

– \(\mathbf {P}:S \times Act \times S \rightarrow [0,1]\) is a transition probability function satisfying \(\sum _{s' \in S} \mathbf {P}(s, \alpha , s') \in \{0,1\} \) for all \(s \in S, \alpha \in Act \), and

– \(\rho :S \times Act \rightarrow \mathbb {R}\) is a reward function.
Example 1
Figure 1 shows an example MC and an example MDP.
We often simplify notation for MCs by omitting the (unique) action. For an MDP \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\), the set of enabled actions of state \(s \in S\) is given by \( Act (s) = \{ \alpha \in Act \mid \sum _{s' \in S} \mathbf {P}(s, \alpha , s') = 1 \}\). We assume that \( Act (s) \ne \emptyset \) for each \(s \in S\). Intuitively, upon performing action \(\alpha \) at state s reward \(\rho (s,\alpha )\) is collected and with probability \(\mathbf {P}(s, \alpha , s')\) we move to \(s' \in S\). Notice that rewards can be positive or negative.
A (deterministic) scheduler for \(\mathcal {M}\) is a function \(\sigma : Paths _{ fin }^{\mathcal {M}} \rightarrow Act \) such that \(\sigma (\hat{\pi }) \in Act ( last (\hat{\pi }))\) for all \(\hat{\pi }\in Paths _{ fin }^{\mathcal {M}}\). The set of (deterministic) schedulers for \(\mathcal {M}\) is \(\mathfrak {S}^{\mathcal {M}}\). \(\sigma \in \mathfrak {S}^{\mathcal {M}}\) is called positional if \(\sigma (\hat{\pi })\) only depends on the last state of \(\hat{\pi }\), i.e., for all \(\hat{\pi }, \hat{\pi }' \in Paths _{ fin }^{\mathcal {M}}\) we have \( last (\hat{\pi }) = last (\hat{\pi }')\) implies \(\sigma (\hat{\pi }) = \sigma (\hat{\pi }')\). For MDP \(\mathcal {M}\) and scheduler \(\sigma \in \mathfrak {S}^{\mathcal {M}}\) the probability measure over finite paths is given by \({\mathrm {Pr}}_ fin ^{\mathcal {M}, \sigma } : Paths _{ fin }^{\mathcal {M}, {s_{I}}} \rightarrow [0,1]\) with \( {\mathrm {Pr}}_ fin ^{\mathcal {M}, \sigma } (s_0 \dots s_n) = \prod _{i=0}^{n-1} \mathbf {P}(s_i, \sigma (s_0\dots s_i), s_{i+1}). \) The probability measure \({\mathrm {Pr}}^{\mathcal {M}, \sigma }\) over measurable sets of infinite paths is obtained via a standard cylinder set construction [15].
Definition 2
(Reachability Probability). The reachability probability of MDP \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\), \(G \subseteq S\), and \(\sigma \in \mathfrak {S}^{\mathcal {M}}\) is given by \({\mathrm {Pr}}^{\mathcal {M}, \sigma }({\lozenge }G)\).
For \(k \in \mathbb {N}\cup \{\infty \}\), the function \({\blacklozenge }^{\le k} G :{\lozenge }G \rightarrow \mathbb {R}\) yields the k-bounded reachability reward of a path \(\pi = s_0 \alpha _0 s_1 \dots \in {\lozenge }G\). We set \({\blacklozenge }^{\le k} G(\pi ) = \sum _{i = 0}^{j-1} \rho (s_i, \alpha _i)\), where \(j = \min (\{i\ge 0 \mid s_i \in G \} \cup \{k\})\). We write \({\blacklozenge }G\) instead of \({\blacklozenge }^{ \le \infty } G\).
Definition 3
(Expected Reward). The expected (reachability) reward of MDP \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\), \(G \subseteq S\), and \(\sigma \in \mathfrak {S}^{\mathcal {M}}\) with \({\mathrm {Pr}}^{\mathcal {M}, \sigma }({\lozenge }G) = 1\) is given by the expectation \({\mathbb {E}}^{\mathcal {M}, \sigma }({\blacklozenge }G) = \int _{\pi \in {\lozenge }G} {\blacklozenge }G (\pi ) \,\mathrm {d}{\mathrm {Pr}}^{\mathcal {M}, \sigma }(\pi )\).
We write \({\mathrm {Pr}}^{\mathcal {M}, \sigma }_s\) and \({\mathbb {E}}^{\mathcal {M}, \sigma }_s\) for the probability measure and expectation obtained by changing the initial state of \(\mathcal {M}\) to \(s \in S\). If \(\mathcal {M}\) is a Markov chain, there is only a single scheduler. In this case we may omit the superscript \(\sigma \) from \({\mathrm {Pr}}^{\mathcal {M}, \sigma }\) and \({\mathbb {E}}^{\mathcal {M}, \sigma }\). We also omit the superscript \(\mathcal {M}\) if it is clear from the context. The maximal reachability probability of \(\mathcal {M}\) and G is given by \({\mathrm {Pr}}^{\mathrm {max}}({\lozenge }G) = \max _{\sigma \in \mathfrak {S}^{\mathcal {M}}} {\mathrm {Pr}}^{\sigma }({\lozenge }G)\). There is a positional scheduler that attains this maximum [16]. The same holds for minimal reachability probabilities and maximal or minimal expected rewards.
Example 2
Consider the MDP \(\mathcal {M}\) from Fig. 1(b). We are interested in the maximal probability to reach state \(s_4\) given by \(\mathrm {Pr}^{\mathrm {max}}({\lozenge }\{s_4 \})\). Since \(s_4\) is not reachable from \(s_3\) we have \(\mathrm {Pr}^{\mathrm {max}}_{s_3}({\lozenge }\{s_4\}) = 0\). Intuitively, choosing action \(\beta \) at state \(s_0\) makes reaching \(s_3\) more likely, which should be avoided in order to maximize the probability to reach \(s_4\). We therefore assume a scheduler \(\sigma \) that always chooses action \(\alpha \) at state \(s_0\). Starting from the initial state \(s_0\), we then eventually take the transition from \(s_2\) to \(s_3\) or the transition from \(s_2\) to \(s_4\) with probability one. The resulting probability to reach \(s_4\) is given by \(\mathrm {Pr}^{\mathrm {max}}({\lozenge }\{s_4 \}) = \mathrm {Pr}^{\sigma }({\lozenge }\{s_4 \}) = 0.3/ (0.1 + 0.3) = 0.75\).
2.2 Probabilistic Model Checking via Interval Iteration
In the following we present approaches to compute reachability probabilities and expected rewards. We consider approximate computations; exact computations are handled in, e.g., [17, 18]. For the sake of clarity, we focus on reachability probabilities and sketch how the techniques can be lifted to expected rewards.
Reachability Probabilities. We fix an MDP \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\), a set of goal states \(G \subseteq S\), and a precision parameter \({\varepsilon }> 0\).
Problem 1
Compute an \({\varepsilon }\)-approximation of the maximal reachability probability \({\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}\), i.e., compute a value \({r}\in [0,1]\) with \(|{r} - {\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}| < {\varepsilon }\).
We briefly sketch how to compute such a value \({r}\) via interval iteration [12, 13, 19]. The computation for minimal reachability probabilities is analogous.
W.l.o.g. it is assumed that the states in G are absorbing. Using graph algorithms, we compute \({S}_{0} = \{ s \in S \mid {\mathrm {Pr}}^\mathrm {max}_s({\lozenge }G) =0 \}\) and partition the state space of \(\mathcal {M}\) into \(S = G \,\dot{\cup }\, {S}_{0} \,\dot{\cup }\, {S}_{?}\) with \({S}_{?} = S \setminus (G \cup {S}_{0})\). If \({s_{I}}\in {S}_{0}\) or \({s_{I}}\in G\), the probability \({\mathrm {Pr}}^\mathrm {max}({\lozenge }G)\) is 0 or 1, respectively. From now on we assume \({s_{I}}\in {S}_{?}\).
We say that \(\mathcal {M}\) is contracting with respect to \(S' \subseteq S\) if \({\mathrm {Pr}}_s^{\sigma }({\lozenge }S') = 1\) for all \(s \in S\) and for all \(\sigma \in \mathfrak {S}^{\mathcal {M}}\). We assume that \(\mathcal {M}\) is contracting with respect to \(G \cup {S}_{0}\). Otherwise, we apply a transformation on the so-called end components^{1} of \(\mathcal {M}\), yielding a contracting MDP \(\mathcal {M}'\) with the same maximal reachability probability as \(\mathcal {M}\). Roughly, this transformation replaces each end component of \(\mathcal {M}\) with a single state whose enabled actions coincide with the actions that previously led outside of the end component. This step is detailed in [13, 19].
A popular technique for computing \({\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}\) is the value iteration algorithm [1]. Given a starting vector \(x \in \mathbb {R}^{S}\) with \({x[{S}_{0}]} = 0\) and \({x[G]} = 1\), standard value iteration computes \(f^k(x)\) for increasing k until \(\max _{s \in {S}} |{f^k(x)[s]} - {f^{k-1}(x)[s]}| < \varepsilon \) holds for a predefined precision \(\varepsilon > 0\). As pointed out in, e.g., [13], there is no guarantee on the preciseness of the result \({r}= {f^k(x)[{s_{I}}]}\), i.e., standard value iteration does not give any evidence on the error \(|{r} - {\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}|\). The intuitive reason is that value iteration only approximates the fixpoint \(x^*\) from one side, yielding no indication of the distance between the current result and \(x^*\).
Example 3
Consider the MDP \(\mathcal {M}\) from Fig. 1(b). We invoked standard value iteration in PRISM [7] and Storm [8] to compute the reachability probability \(\mathrm {Pr}^{\mathrm {max}}({\lozenge }\{s_4 \})\). Recall from Example 2 that the correct solution is 0.75. With (absolute) precision \({\varepsilon }= 10^{-6}\) both model checkers returned 0.7248. Notice that the user can improve the precision by considering, e.g., \({\varepsilon }= 10^{-8}\) which yields 0.7497. However, there is no guarantee on the preciseness of a given result.
The interval iteration algorithm [12, 13, 19] addresses the impreciseness of value iteration. The idea is to approach the fixpoint \(x^*\) from below and from above. The first step is to find starting vectors \(x_\ell , x_u \in \mathbb {R}^{S}\) satisfying \({x_\ell [{S}_{0}]} = {x_u[{S}_{0}]} = 0\), \({x_\ell [G]} = {x_u[G]} = 1\), and \(x_\ell \le x^* \le x_u\). As the entries of \(x^*\) are probabilities, it is always valid to set \({x_\ell [{S}_{?}]} = 0\) and \({x_u[{S}_{?}]} = 1\). We have \(f^k(x_\ell ) \le x^* \le f^k(x_u)\) for any \(k \ge 0\). Interval iteration computes \(f^k(x_\ell )\) and \(f^k(x_u)\) for increasing k until \(\max _{s \in S} ({f^{k}(x_u)[s]} - {f^k(x_\ell )[s]}) < 2 \varepsilon \). For the result \({r}= \nicefrac {1}{2} \cdot ({f^k(x_\ell )[{s_{I}}]} + {f^k(x_u)[{s_{I}}]}) \) we obtain \(|{r} - {\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}| < \varepsilon \), i.e., we get a sound approximation of the maximal reachability probability.
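The scheme can be sketched as follows, here on a small hypothetical Markov chain (for MCs, the maximum over schedulers disappears); `interval_iteration`, `P`, and `to_goal` are our illustrative names, not part of any tool:

```python
def interval_iteration(P, to_goal, eps):
    """P[s][t]: transition probabilities among the S_? states;
    to_goal[s]: one-step probability of moving from s into G.
    Returns (result, iterations) with |result - Pr(<>G)| < eps."""
    states = list(P)
    lo = {s: 0.0 for s in states}   # x_l: approaches the fixpoint from below
    hi = {s: 1.0 for s in states}   # x_u: approaches the fixpoint from above
    k = 0
    while max(hi[s] - lo[s] for s in states) >= 2 * eps:
        lo = {s: to_goal[s] + sum(p * lo[t] for t, p in P[s].items()) for s in states}
        hi = {s: to_goal[s] + sum(p * hi[t] for t, p in P[s].items()) for s in states}
        k += 1
    s_init = states[0]
    return (lo[s_init] + hi[s_init]) / 2, k

# Toy chain: s0 -> s0 w.p. 0.5, s0 -> s1 w.p. 0.5;
#            s1 -> s1 w.p. 0.5, s1 -> G w.p. 0.3, s1 -> sink w.p. 0.2.
P = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s1": 0.5}}
to_goal = {"s0": 0.0, "s1": 0.3}
r, k = interval_iteration(P, to_goal, 1e-6)
# Exact value: Pr_{s1}(<>G) = 0.3/0.5 = 0.6, and likewise Pr_{s0}(<>G) = 0.6.
```

Transitions into \({S}_{0}\) are represented implicitly by the missing probability mass, which contributes 0 to both vectors.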
Example 4
We invoked interval iteration in PRISM and Storm to compute the reachability probability \(\mathrm {Pr}^{\mathrm {max}}({\lozenge }\{s_4 \})\) for the MDP \(\mathcal {M}\) from Fig. 1(b). Both implementations correctly yield an \({\varepsilon }\)-approximation of \(\mathrm {Pr}^{\mathrm {max}}({\lozenge }\{s_4 \})\), where we considered \({\varepsilon }= 10^{-6}\). However, both PRISM and Storm required roughly 300,000 iterations for convergence.
Expected Rewards. Whereas [13, 19] only consider reachability probabilities, [12] extends interval iteration to compute expected rewards. Let \(\mathcal {M}\) be an MDP and G be a set of absorbing states such that \(\mathcal {M}\) is contracting with respect to G.
Problem 2
Compute an \({\varepsilon }\)-approximation of the maximal expected reachability reward \({{\mathbb {E}^{\mathrm {max}}}({\blacklozenge }G)}\), i.e., compute a value \({r}\in \mathbb {R}\) with \(|{r} - {{\mathbb {E}^{\mathrm {max}}}({\blacklozenge }G)}| < {\varepsilon }\).
3 Sound Value Iteration for MCs
We present an algorithm for computing reachability probabilities and expected rewards as in Problems 1 and 2. The algorithm is an alternative to the interval iteration approach [12, 20] but (i) does not require an a priori computation of starting vectors \(x_\ell , x_u \in \mathbb {R}^{S}\) and (ii) converges faster on many practical benchmarks as shown in Sect. 5. For the sake of simplicity, we first restrict to computing reachability probabilities on MCs.
In the following, let \(\mathcal {D}= (S, \mathbf {P}, {s_{I}}, \rho )\) be an MC, \(G \subseteq S\) be a set of absorbing goal states, and \({\varepsilon }> 0\) be a precision parameter. We consider the partition \(S = G \,\dot{\cup }\, {S}_{0} \,\dot{\cup }\, {S}_{?}\) as in Sect. 2.2. The following theorem captures the key insight of our algorithm.
Theorem 1
For any \(k \ge 0\), tight and safe lower and upper bounds on \({{\mathrm {Pr}}({\lozenge }G)}\) can be derived from

– \({{\mathrm {Pr}}({\lozenge }^{\le {k}} G)}\), the probability to reach a state in G within k steps, and

– \({\mathrm {Pr}}( {\square }^{\le k} {S}_{?})\), the probability to stay in \({S}_{?}\) during the first k steps.
This can be realized via a value-iteration-based procedure. The obtained bounds on \({{\mathrm {Pr}}({\lozenge }G)}\) can be tightened arbitrarily since \({\mathrm {Pr}}({\square }^{\le k} {S}_{?})\) approaches 0 for increasing k. In the following, we address the correctness of Theorem 1, describe the details of our algorithm, and indicate how the results can be lifted to expected rewards.
3.1 Approximating Reachability Probabilities
To approximate the reachability probability \({{\mathrm {Pr}}({\lozenge }G)}\), we consider the step-bounded reachability probability \({{\mathrm {Pr}}({\lozenge }^{\le {k}} G)}\) for \(k \ge 0\) and provide a lower and an upper bound for the ‘missing’ probability \({{\mathrm {Pr}}({\lozenge }G)} - {{\mathrm {Pr}}({\lozenge }^{\le {k}} G)}\). Note that \( {\lozenge }G\) is the disjoint union of the paths that reach G within k steps (given by \({\lozenge }^{\le k} G\)) and the paths that reach G only after k steps (given by \({S}_{?} {{\mathrm{\mathcal {U}}}}^{> k} G\)).
Lemma 1
For any \(k \ge 0\) we have \({{\mathrm {Pr}}({\lozenge }G)}= {{\mathrm {Pr}}({\lozenge }^{\le {k}} G)} + {\mathrm {Pr}}({S}_{?} {{\mathrm{\mathcal {U}}}}^{> k} G)\).
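A quick numerical check of this identity on a hypothetical one-state chain (our illustration), where both summands have simple closed forms:

```python
# From the only S_? state a we reach G w.p. 0.3, a sink w.p. 0.2,
# and stay in a w.p. 0.5. Then Pr(<>G) = 0.3/(0.3+0.2) = 0.6.
p_stay, p_goal = 0.5, 0.3
pr_total = p_goal / (1 - p_stay)          # = 0.6

for k in range(10):
    # Pr(<>^{<=k} G): first hit G at step j <= k, i.e. stay j-1 steps, then move.
    pr_within_k = p_goal * sum(p_stay**i for i in range(k))
    # Pr(S_? U^{>k} G): survive k steps in S_?, then eventually reach G.
    pr_after_k = p_stay**k * pr_total
    assert abs(pr_total - (pr_within_k + pr_after_k)) < 1e-12
```

The second summand shrinks geometrically with k, which is exactly why the bounds of Theorem 1 can be tightened arbitrarily.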
Proposition 1
Let \(\ell , u \in \mathbb {R}\) with \(\ell \le {\mathrm {Pr}}_s({\lozenge }G) \le u\) for all \(s \in {S}_{?}\). Then for any \(k \ge 0\) we have \( {{\mathrm {Pr}}({\lozenge }^{\le {k}} G)} + {\mathrm {Pr}}({\square }^{\le k} {S}_{?}) \cdot \ell \le {{\mathrm {Pr}}({\lozenge }G)}\le {{\mathrm {Pr}}({\lozenge }^{\le {k}} G)} + {\mathrm {Pr}}({\square }^{\le k} {S}_{?}) \cdot u \).
Remark 1
The bounds for \({{\mathrm {Pr}}({\lozenge }G)}\) given by Proposition 1 are similar to the bounds obtained after performing k iterations of interval iteration with starting vectors \(x_\ell , x_u \in \mathbb {R}^{S}\), where \({x_\ell [{S}_{?}]} = \ell \) and \({x_u[{S}_{?}]} = u\).
We now discuss how the bounds \(\ell \) and u can be obtained from the step bounded probabilities \( {\mathrm {Pr}}_s({\lozenge }^{\le k} G)\) and \({\mathrm {Pr}}_s({\square }^{\le k} {S}_{?})\) for \(s \in {S}_{?}\). We focus on the upper bound u. The reasoning for the lower bound \(\ell \) is similar.
Proposition 2
Let \(k \ge 0\) be such that \({\mathrm {Pr}}_s({\square }^{\le k} {S}_{?}) < 1\) for all \(s \in {S}_{?}\). Then \( u = \max _{s \in {S}_{?}} \frac{{\mathrm {Pr}}_s({\lozenge }^{\le k} G)}{1 - {\mathrm {Pr}}_s({\square }^{\le k} {S}_{?})} \) satisfies \({\mathrm {Pr}}_s({\lozenge }G) \le u\) for all \(s \in {S}_{?}\).
3.2 Extending the Value Iteration Approach
Recall the standard value iteration algorithm for approximating \({{\mathrm {Pr}}({\lozenge }G)}\) as discussed in Sect. 2.2. The function \(f :\mathbb {R}^{S} \rightarrow \mathbb {R}^{S}\) for MCs simplifies to \({f(x)[{S}_{0}]} = 0\), \({f(x)[G]} = 1\), and \({f(x)[s]} = \sum _{s' \in S} \mathbf {P}(s,s') \cdot {x[s']}\) for \(s \in {S}_{?}\). We can compute the k-step bounded reachability probability at every state \(s \in S\) by performing k iterations of value iteration [15, Remark 10.104]. More precisely, when applying f k times on starting vector \(x \in \mathbb {R}^{S}\) with \({x[G]} = 1\) and \({x[S \setminus G]} = 0\) we get \( {\mathrm {Pr}}_s({\lozenge }^{\le k} G)={f^k(x)[s]}.\) The probabilities \({\mathrm {Pr}}_s({\square }^{\le k} {S}_{?})\) for \(s \in S\) can be computed similarly. Let \(h :\mathbb {R}^{S} \rightarrow \mathbb {R}^{S}\) with \({h(y)[S \setminus {S}_{?}]} = 0\) and \({h(y)[s]} = \sum _{s' \in S} \mathbf {P}(s,s') \cdot {y[s']}\) for \(s \in {S}_{?}\). For starting vector \(y \in \mathbb {R}^{S}\) with \({y[{S}_{?}]} = 1\) and \({y[S \setminus {S}_{?}]} = 0\) we get \({\mathrm {Pr}}_s({\square }^{\le k }{S}_{?}) = {h^k(y)[s]}\).
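The two value-iteration instances can be sketched as follows, on a hypothetical two-state chain; `step_f` and `step_h` are our illustrative names for one application of f and h, restricted to the \({S}_{?}\) states:

```python
# Toy chain (S_? = {s0, s1}): s0 -> s0 w.p. 0.5, s0 -> s1 w.p. 0.5;
# s1 -> s1 w.p. 0.5, s1 -> G w.p. 0.3, s1 -> sink w.p. 0.2.
P = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s1": 0.5}}  # transitions within S_?
to_goal = {"s0": 0.0, "s1": 0.3}                        # one-step probability into G

def step_f(x):
    """One application of f: goal states contribute x[G] = 1 via to_goal."""
    return {s: to_goal[s] + sum(p * x[t] for t, p in P[s].items()) for s in P}

def step_h(y):
    """One application of h: states outside S_? contribute 0."""
    return {s: sum(p * y[t] for t, p in P[s].items()) for s in P}

x = {s: 0.0 for s in P}   # after k steps: x[s] = Pr_s(<>^{<=k} G)
y = {s: 1.0 for s in P}   # after k steps: y[s] = Pr_s([]^{<=k} S_?)
for _ in range(30):
    x, y = step_f(x), step_h(y)   # both right-hand sides read the old vectors
# x approaches Pr_s(<>G) = 0.6 for both states, while y approaches 0.
```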
Algorithm 1 depicts our approach. It maintains vectors \(x_k,y_k \in \mathbb {R}^{S}\) which, after k iterations of the loop, store the kstep bounded probabilities \({\mathrm {Pr}}_s({\lozenge }^{\le k} G)\) and \({\mathrm {Pr}}_s({\square }^{\le k} {S}_{?})\), respectively. Additionally, the algorithm considers lower bounds \(\ell _k\) and upper bounds \(u_k\) such that the following invariant holds.
Lemma 2
After executing the loop of Algorithm 1 k times we have for all \(s \in {S}_{?}\) that \( {x_k[s]} = {\mathrm {Pr}}_s({\lozenge }^{\le k} G)\), \({y_k[s]} = {\mathrm {Pr}}_s({\square }^{\le k} {S}_{?})\), and \(\ell _k \le {\mathrm {Pr}}_s({\lozenge }G) \le u_k \).
The correctness of the algorithm follows from Theorem 1. Termination is guaranteed since \({\mathrm {Pr}}({\lozenge }({S}_{0} \cup G) ) = 1\) and therefore \(\lim _{k\rightarrow \infty } {\mathrm {Pr}}( {\square }^{\le k} {S}_{?}) = {\mathrm {Pr}}( {\square }{S}_{?}) = 0\).
Theorem 2
Algorithm 1 terminates for any MC \(\mathcal {D}\), goal states G, and precision \({\varepsilon }> 0\). The returned value r satisfies \(|r - {{\mathrm {Pr}}({\lozenge }G)}| < \varepsilon \).
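A compact sketch of such an algorithm, under our reading of the abridged propositions above; in particular we use the bound \(u_k = \max _{s} x_k[s]/(1-y_k[s])\) (and its min-counterpart for \(\ell _k\)), the expression that also appears in Lemma 4. All names are illustrative, not Storm's implementation:

```python
def sound_vi(P, to_goal, eps):
    """Sound value iteration sketch for an MC.
    P[s][t]: transition probabilities among S_? states;
    to_goal[s]: one-step probability of moving from s into G."""
    states = list(P)
    x = {s: 0.0 for s in states}   # x[s] = Pr_s(<>^{<=k} G)
    y = {s: 1.0 for s in states}   # y[s] = Pr_s([]^{<=k} S_?)
    s_init = states[0]
    while True:   # terminates for contracting chains, since y -> 0
        x = {s: to_goal[s] + sum(p * x[t] for t, p in P[s].items()) for s in states}
        y = {s: sum(p * y[t] for t, p in P[s].items()) for s in states}
        if max(y.values()) >= 1.0:
            continue                # bounds are not informative yet
        # uniform bounds on Pr_s(<>G) over all s in S_?:
        lo = min(x[s] / (1 - y[s]) for s in states)
        up = max(x[s] / (1 - y[s]) for s in states)
        lower = x[s_init] + y[s_init] * lo
        upper = x[s_init] + y[s_init] * up
        if upper - lower < 2 * eps:
            return (lower + upper) / 2

P = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s1": 0.5}}
to_goal = {"s0": 0.0, "s1": 0.3}
r = sound_vi(P, to_goal, 1e-6)   # exact answer: 0.6
```

On this toy chain the two ratios \(x[s]/(1-y[s])\) coincide almost immediately, so the loop terminates after a handful of iterations, whereas interval iteration has to shrink the gap between the all-zero and all-one starting vectors step by step.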
Example 5
3.3 Sound Value Iteration for Expected Rewards
We lift our approach to expected rewards in a straightforward manner. Let \(G \subseteq S\) be a set of absorbing goal states of MC \(\mathcal {D}\) such that \({{\mathrm {Pr}}({\lozenge }G)}= 1\). Further let \({S}_{?} = S \setminus G\). For \(k\ge 0\) we observe that the expected reward \({{\mathbb {E}}({\blacklozenge }G)}\) can be split into the expected reward collected within k steps and the expected reward collected only after k steps, i.e., \( {{\mathbb {E}}({\blacklozenge }G)}= {{\mathbb {E}}({\blacklozenge }^{\le {k}} G)} + \sum _{s \in {S}_{?}} {\mathrm {Pr}}({S}_{?} {{\mathrm{\mathcal {U}}}}^{=k} \{s\}) \cdot {\mathbb {E}}_s ({\blacklozenge }G). \) Following a similar reasoning as in Sect. 3.1 we can show the following.
Theorem 3
Recall the function \(g :\mathbb {R}^{S} \rightarrow \mathbb {R}^{S}\) from Sect. 2.2, given by \({g(x)[G]} = 0\) and \( {g(x)[s]} = \rho (s) + \sum _{s' \in S} \mathbf {P}(s, s') \cdot {x[s']} \) for \(s \in {S}_{?}\). For \(s \in S\) and \(x \in \mathbb {R}^{S}\) with \({x[S]} = 0\) we have \({\mathbb {E}}_s({\blacklozenge }^{\le k} G) = {g^k(x)[s]}\). We modify Algorithm 1 such that it considers function g instead of function f. Then, the returned value \({r}\) satisfies \(|r - {{\mathbb {E}}({\blacklozenge }G)}| < {\varepsilon }\).
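For a hypothetical one-state chain with reward 1 per step (our illustration), repeated application of g indeed converges to the expected reachability reward:

```python
# From the single S_? state we reach G w.p. 0.3 and stay w.p. 0.7,
# collecting reward 1 per step. The expected reward is the mean of a
# geometric distribution: E(<*>G) = 1/0.3.
p_stay, reward = 0.7, 1.0

def g_step(x):
    """One application of g for the single S_? state (G contributes 0)."""
    return reward + p_stay * x

x = 0.0
for _ in range(100):
    x = g_step(x)   # after k steps: E(<*>^{<=k} G) = (1 - 0.7**k) / 0.3
# x is now numerically indistinguishable from 1/0.3.
```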
3.4 Optimizations
Algorithm 1 can make use of initial bounds \(\ell _0, u_0 \in \mathbb {R}\) with \(\ell _0 \le {\mathrm {Pr}}_s({\lozenge }G) \le u_0\) for all \(s \in {S}_{?}\). Such bounds could be derived, e.g., from domain knowledge or during preprocessing [12]. The algorithm always chooses the largest available lower bound for \(\ell _k\) and the smallest available upper bound for \(u_k\), respectively. If Algorithm 1 and interval iteration are initialized with the same bounds, Algorithm 1 requires at most as many iterations as interval iteration (cf. Remark 1).
Topological value iteration [14] exploits the graph structure of the MC \(\mathcal {D}\). The idea is to decompose the state space S of \(\mathcal {D}\) into strongly connected components^{2} (SCCs) that are analyzed individually. The procedure can improve the runtime of classical value iteration since a single iteration only has to update the values for the current SCC. A topological variant of interval iteration is introduced in [12]. Given these results, sound value iteration can be extended similarly.
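The decomposition itself can be sketched with Kosaraju's algorithm (our illustrative code, not Storm's implementation); returning the components sinks-first means each SCC can be solved before any SCC with edges into it:

```python
def sccs(graph):
    """graph: dict node -> list of successor nodes. Returns the SCCs in
    reverse topological order: every edge leaving an SCC points into an
    SCC that appears earlier in the returned list."""
    order, seen = [], set()

    def dfs(g, v, out):
        # iterative DFS recording vertices in postorder
        stack = [(v, iter(g.get(v, [])))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            advanced = False
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g.get(w, []))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    for v in graph:
        if v not in seen:
            dfs(graph, v, order)

    # second pass on the transposed graph, in decreasing finish order
    transpose = {v: [] for v in graph}
    for v, succs in graph.items():
        for w in succs:
            transpose[w].append(v)

    seen.clear()
    components = []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(transpose, v, comp)
            components.append(set(comp))
    return components[::-1]   # sinks first: solve each SCC before its predecessors

# Example: {a, b} form one SCC with an edge into the SCC {c}.
comps = sccs({"a": ["b"], "b": ["a", "c"], "c": ["c"]})
# comps == [{"c"}, {"a", "b"}]
```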
4 Sound Value Iteration for MDPs
We extend sound value iteration to compute reachability probabilities in MDPs. Assume an MDP \(\mathcal {M}= (S, Act , \mathbf {P}, {s_{I}}, \rho )\) and a set of absorbing goal states G. For simplicity, we focus on maximal reachability probabilities, i.e., we compute \({\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}\). Minimal reachability probabilities and expected rewards are analogous. As in Sect. 2.2 we consider the partition \(S = G \,\dot{\cup }\, {S}_{0} \,\dot{\cup }\, {S}_{?}\) such that \(\mathcal {M}\) is contracting with respect to \(G \cup {S}_{0}\).
4.1 Approximating Maximal Reachability Probabilities
Theorem 4
Example 6

– \(\max _{\sigma \in \mathfrak {S}^{\mathcal {M}}} \hat{u}_1^\sigma = \hat{u}_1^{\sigma _\alpha } = 0 + 0.8 \cdot \max (0,1,0) = 0.8\) and

– \(\max _{\sigma \in \mathfrak {S}^{\mathcal {M}}} \hat{u}_2^\sigma = \hat{u}_2^{\sigma _{\beta \beta }} = 0.42+0.16 \cdot \max (0.5,0.19,1) = 0.5.\)
4.2 Extending the Value Iteration Approach
The idea of our algorithm is to compute the bounds for \({\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}\) as in Theorem 4 for increasing \(k \ge 0\). Algorithm 2 outlines the procedure. Similar to Algorithm 1 for MCs, vectors \(x_k,y_k \in \mathbb {R}^{S}\) store the step-bounded probabilities \(\mathrm {Pr}^{\sigma _k}_s({\lozenge }^{\le k} G)\) and \(\mathrm {Pr}^{\sigma _k}_s({\square }^{\le k} {S}_{?})\) for any \(s \in S\). In addition, schedulers \(\sigma _k\) and upper bounds \(u_k \ge \max _{s \in {S}_{?}} \mathrm {Pr}^{\mathrm {max}}_s({\lozenge }G)\) are computed in a way that Theorem 4 is applicable.
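One iteration of this joint update can be sketched as follows: for each state, pick the action maximizing \(x + y \cdot u\) (cf. the argmax in Lemma 3) and advance both vectors under that action. All names are illustrative, and the maintenance of \(u_k\) itself (Lemma 4) is omitted:

```python
def mdp_step(actions, x, y, u):
    """actions[s]: list of (to_goal, P_row) pairs, one per enabled action;
    P_row[t]: transition probabilities into S_? states.
    Returns updated vectors and the chosen action index per state."""
    new_x, new_y, choice = {}, {}, {}
    for s, acts in actions.items():
        best = None
        for i, (to_goal, row) in enumerate(acts):
            xs = to_goal + sum(p * x[t] for t, p in row.items())
            ys = sum(p * y[t] for t, p in row.items())
            if best is None or xs + ys * u > best[0]:
                best = (xs + ys * u, xs, ys, i)
        new_x[s], new_y[s], choice[s] = best[1], best[2], best[3]
    return new_x, new_y, choice

# Toy S_? state with two actions:
#   alpha: goal w.p. 0.5, stay w.p. 0.5   -> score 0.5 + 0.5 * 1 = 1.0
#   beta:  goal w.p. 0.3, stay w.p. 0.6   -> score 0.3 + 0.6 * 1 = 0.9
actions = {"s": [(0.5, {"s": 0.5}), (0.3, {"s": 0.6})]}
x, y, choice = mdp_step(actions, {"s": 0.0}, {"s": 1.0}, u=1.0)
# choice["s"] == 0 (alpha); x["s"] == 0.5, y["s"] == 0.5
```

Note that both vectors are advanced under the same action per state, which is what makes the step-bounded probabilities consistent with a single scheduler \(\sigma _k\).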
Lemma 3
After executing k iterations of Algorithm 2 we have for all \(s \in {S}_{?}\) that \( {x_k[s]} = \mathrm {Pr}^{\sigma _k}_s({\lozenge }^{\le k} G)\), \({y_k[s]} = \mathrm {Pr}^{\sigma _k}_s({\square }^{\le k} {S}_{?})\), and \(\ell _k \le \mathrm {Pr}^{\mathrm {max}}_s({\lozenge }G) \le u_k\), where \(\sigma _k \in \mathrm {arg\,max}_{\sigma \in \mathfrak {S}^{\mathcal {M}}} \mathrm {Pr}^{\sigma }_s({\lozenge }^{\le k} G) + \mathrm {Pr}^{\sigma }_s( {\square }^{\le k} {S}_{?}) \cdot u_k\).
Lemma 4
After executing Line 18 of Algorithm 2: \(u_k \ge \max _{s \in {S}_{?}} \mathrm {Pr}^{\mathrm {max}}_s ({\lozenge }G)\).

– If \(v_\infty > u_{k-1}\), then also \(\max _{s \in {S}_{?}} \frac{{x_k[s]}}{1-{y_k[s]}} > u_{k-1}\). Hence \(u_k = u_{k-1} \ge u^*\).

– If \(d_k \le v_\infty \le u_{k-1}\), we can show that \(v_i \le v_{i-1}\). It follows that for all \(i > 0\), \(v_{i-1} \in U_k\), implying \(v_{i} \ge u^*\). Thus we get \(u_k = \max _{s \in {S}_{?}} \frac{{x_k[s]}}{1-{y_k[s]}} \ge v_\infty \ge u^*\).

– If \(v_\infty < d_k\), then there is an \(i \ge 0\) with \(v_i \ge d_k\) and \(u^* \le v_{i+1} < d_k\). It follows that \(u_k = d_k \ge u^*\).
Example 7
Theorem 5
Algorithm 2 terminates for any MDP \(\mathcal {M}\), goal states G, and precision \(\varepsilon > 0\). The returned value r satisfies \(|r - {\mathrm {Pr}^{\mathrm {max}}({\lozenge }G)}| \le \varepsilon \).
The correctness of the algorithm follows from Theorem 4 and Lemma 3. Termination follows since \(\mathcal {M}\) is contracting with respect to \({S}_{0} \cup G\), implying \(\lim _{k\rightarrow \infty } \mathrm {Pr}^{\sigma }({\square }^{\le k} {S}_{?}) = 0\). The optimizations for Algorithm 1 mentioned in Sect. 3.4 can be applied to Algorithm 2 as well.
5 Experimental Evaluation
Implementation. We implemented sound value iteration for MCs and MDPs into the model checker Storm [8]. The implementation computes reachability probabilities and expected rewards using explicit data structures such as sparse matrices and vectors. Moreover, multi-objective model checking is supported, where we straightforwardly extend the value-iteration-based approach of [22] to sound value iteration. We also implemented the optimizations given in Sect. 3.4.
The implementation is available at www.stormchecker.org.

Benchmarks. We evaluated the approaches on a large set of benchmarks:

– all MCs, MDPs, and CTMCs from the PRISM benchmark suite [23],

– several case studies from the PRISM website www.prismmodelchecker.org,

– Markov automata accompanying IMCA [24], and

– multi-objective MDPs considered in [22].
In total, 130 model and property instances were considered. For CTMCs and Markov automata we computed (untimed) reachability probabilities or expected rewards on the underlying MC and the underlying MDP, respectively. In all experiments the precision parameter was given by \({\varepsilon }= 10^{-6}\).
Figure 3(a) depicts the model checking times for \(\mathrm {SVI}\) (x-axis) and \(\mathrm {II}\) (y-axis). For better readability, the benchmarks are divided into four plots with different scales. Triangles and circles indicate MC and MDP benchmarks, respectively. Similarly, Fig. 3(b) shows the required iterations of the two approaches. We observe that \(\mathrm {SVI}\) converged faster and required fewer iterations for almost all MCs and MDPs. \(\mathrm {SVI}\) performed particularly well on the challenging instances where many iterations are required. Similar observations were made when comparing the topological variants of \(\mathrm {SVI}\) and \(\mathrm {II}\). Both approaches were still competitive when no a priori bounds were given to \(\mathrm {SVI}\). More details are given in [21].
Figure 4 indicates the model checking times of \(\mathrm {SVI}\) and \(\mathrm {II}\) as well as their topological variants. For reference, we also consider standard (unsound) value iteration \((\mathrm {VI})\). The x-axis depicts the number of instances that have been solved by the corresponding approach within the time limit indicated on the y-axis. Hence, a point (x, y) means that for x instances the model checking time was at most y. We observe that the topological variant of \(\mathrm {SVI}\) yielded the best run times among all sound approaches and even competes with (unsound) \(\mathrm {VI}\).
6 Conclusion
In this paper we presented a sound variant of the value iteration algorithm which safely approximates reachability probabilities and expected rewards in MCs and MDPs. Experiments on a large set of benchmarks indicate that our approach is a reasonable alternative to the recently proposed interval iteration algorithm.
Footnotes
 1.
Intuitively, an end component is a set of states \(S' \subseteq S\) for which some scheduler ensures that, starting from any \(s \in S'\), exactly the states in \(S'\) are visited infinitely often.
 2.
\(S' \subseteq S\) is a connected component if \(s\) can be reached from \(s'\) for all \(s,s' \in S'\). \(S'\) is a strongly connected component if no proper superset of \(S'\) is a connected component.
References
 1. Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, Hoboken (1994)
 2. Feinberg, E.A., Shwartz, A.: Handbook of Markov Decision Processes: Methods and Applications. Kluwer, Dordrecht (2002)
 3. Katoen, J.P.: The probabilistic model checking landscape. In: LICS, pp. 31–45. ACM (2016)
 4. Baier, C.: Probabilistic model checking. In: Dependable Software Systems Engineering. NATO Science for Peace and Security Series D: Information and Communication Security, vol. 45, pp. 1–23. IOS Press (2016)
 5. Etessami, K.: Analysis of probabilistic processes and automata theory. In: Handbook of Automata Theory. European Mathematical Society (2016, to appear)
 6. Baier, C., de Alfaro, L., Forejt, V., Kwiatkowska, M.: Probabilistic model checking. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds.) Handbook of Model Checking, pp. 963–999. Springer, Cham (2018)
 7. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_47
 8. Dehnert, C., Junges, S., Katoen, J.P., Volk, M.: A Storm is coming: a modern probabilistic model checker. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 592–600. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_31
 9. Katoen, J., Zapreev, I.S.: Safe on-the-fly steady-state detection for time-bounded reachability. In: QEST, pp. 301–310. IEEE Computer Society (2006)
 10. Malhotra, M.: A computationally efficient technique for transient analysis of repairable Markovian systems. Perform. Eval. 24(4), 311–331 (1996)
 11. Daca, P., Henzinger, T.A., Kretínský, J., Petrov, T.: Faster statistical model checking for unbounded temporal properties. ACM Trans. Comput. Log. 18(2), 12:1–12:25 (2017)
 12. Baier, C., Klein, J., Leuschner, L., Parker, D., Wunderlich, S.: Ensuring the reliability of your model checker: interval iteration for Markov decision processes. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 160–180. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_8
 13. Haddad, S., Monmege, B.: Interval iteration algorithm for MDPs and IMDPs. Theor. Comput. Sci. 735, 111–131 (2017)
 14. Dai, P., Weld, D.S., Goldsmith, J.: Topological value iteration algorithms. J. Artif. Intell. Res. 42, 181–209 (2011)
 15. Baier, C., Katoen, J.P.: Principles of Model Checking. The MIT Press, Cambridge (2008)
 16. Bertsekas, D.P., Tsitsiklis, J.N.: An analysis of stochastic shortest path problems. Math. Oper. Res. 16(3), 580–595 (1991)
 17. Giro, S.: Efficient computation of exact solutions for quantitative model checking. In: QAPL. EPTCS, vol. 85, pp. 17–32 (2012)
 18. Bauer, M.S., Mathur, U., Chadha, R., Sistla, A.P., Viswanathan, M.: Exact quantitative probabilistic model checking through rational search. In: FMCAD, pp. 92–99. IEEE (2017)
 19. Brázdil, T., et al.: Verification of Markov decision processes using learning algorithms. In: Cassez, F., Raskin, J.F. (eds.) ATVA 2014. LNCS, vol. 8837, pp. 98–114. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11936-6_8
 20. Haddad, S., Monmege, B.: Reachability in MDPs: refining convergence of value iteration. In: Ouaknine, J., Potapov, I., Worrell, J. (eds.) RP 2014. LNCS, vol. 8762, pp. 125–137. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11439-2_10
 21. Quatmann, T., Katoen, J.P.: Sound value iteration. Technical report, CoRR abs/1804.05001 (2018)
 22. Forejt, V., Kwiatkowska, M., Parker, D.: Pareto curves for probabilistic model checking. In: Chakraborty, S., Mukund, M. (eds.) ATVA 2012. LNCS, pp. 317–332. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33386-6_25
 23. Kwiatkowska, M., Norman, G., Parker, D.: The PRISM benchmark suite. In: Proceedings of QEST, pp. 203–204. IEEE CS (2012)
 24. Guck, D., Timmer, M., Hatefi, H., Ruijters, E., Stoelinga, M.: Modelling and analysis of Markov reward automata. In: Cassez, F., Raskin, J.F. (eds.) ATVA 2014. LNCS, vol. 8837, pp. 168–184. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11936-6_13
 25. Quatmann, T., Katoen, J.P.: Experimental Results for Sound Value Iteration. figshare (2018). https://doi.org/10.6084/m9.figshare.6139052
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.