
1 Introduction

Expected visiting times. Common questions for the quantitative analysis of Markov chains include reachability probabilities, stationary distributions, and expected rewards [34]. Many authors [19, 23, 24, 36, 44, 48, 55] have recognized the importance of another quantity called expected visiting times (EVTs), which describe the expected time a system spends in each state. EVTs are characterized as the unique solution of a linear equation system [36]. They are not only relevant in their own right, but also useful to obtain various other quantities, including the ones mentioned above. This applies particularly to forward analyses which aim at computing, e.g., the distribution over terminal states given an initial distribution.

Sound approximation of EVTs. In the context of (probabilistic) model checking, the two main requirements for any numeric procedure are scalability and soundness, i.e., the error in the reported result has to be bounded by a predefined threshold. Scalability is typically achieved via numerically robust iterative methods [52, 57, 59] such as the Jacobi or Gauss-Seidel method [57]. In general, these methods do not converge to the exact solution after a finite number of iterations. Thus, the procedure is usually stopped as soon as a termination criterion is satisfied [52]. However, standard stopping criteria such as a small difference between consecutive iterates are not sound in the above sense: they do not actually indicate how close the approximation is to the true solution. Since the correctness of results in model checking, especially for safety-critical systems, is crucial, several authors have proposed sound iterative algorithms [6, 26, 30, 50]. While these works focus on computing quantities such as reachability probabilities and expected rewards, the sound computation of EVTs has not yet been studied.

Motivating example: Verifying sampling algorithms. To illustrate the use of EVTs in probabilistic verification tasks, consider the Markov chain in Figure 1. It is a finite-state model of a program — the Fast Dice Roller [42] — which takes as input an integer \(N \ge 1\) and produces a uniformly distributed output in \(\{1,\ldots ,N\}\) using unbiased coin flips only. The Fast Dice Roller thus solves a generalized Bernoulli Factory problem [35]. Our model in Figure 1 is for the case where \(N=6\) is fixed. How can we establish that each of the terminal states is indeed reached with probability sufficiently close to or exactly \(\tfrac{1}{6}\)?

The standard approach for answering this question is to solve N linear equation systems, one for each terminal state [5, Ch. 10]. An alternative (and seemingly less well-known) method is to compute the EVTs of each state in the Markov chain, which requires solving just a single linear equation system. All N desired probabilities can then easily be derived from the EVTs [36]: For instance, the states \(s_3,s_4,s_6\) can all be shown to have EVT \(\tfrac{1}{3}\), and thus the reachability probabilities of the terminal states are all \(\tfrac{1}{6} = \tfrac{1}{2} \cdot \tfrac{1}{3}\). Similarly, EVTs are useful for computing conditional expected rewards. For Bernoulli Factories, this allows us to examine whether some outcomes take longer to compute in expectation than others, which is important to analyze possible side channel attacks in a security context. Furthermore, we show in this paper how computing stationary distributions reduces to EVTs. Such distributions provide insight into a system’s long-term behaviour; applications include the mean-payoff of a given policy in a Markov decision process (MDP) [49, Pr. 8.1.1], the distribution computed by a Chemical Reaction Network [12], and the semantics of a probabilistic NetKAT network [56].
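The control flow of the Fast Dice Roller itself is simple enough to state directly. The following Python sketch is a transcription of the algorithm from [42] for illustration only; the coin-flip source `flip` and the final frequency check are our own assumptions, and the statistical check is, of course, no substitute for the verification discussed above.

```python
import random

def fast_dice_roller(N, flip):
    """Fast Dice Roller [42]: returns a uniform sample from {1, ..., N},
    where flip() is an unbiased coin returning 0 or 1."""
    v, c = 1, 0
    while True:
        v, c = 2 * v, 2 * c + flip()
        if v >= N:
            if c < N:
                return c + 1
            v, c = v - N, c - N  # reject, but reuse the leftover randomness

rng = random.Random(42)
samples = [fast_dice_roller(6, lambda: rng.getrandbits(1)) for _ in range(6000)]
counts = {k: samples.count(k) for k in range(1, 7)}
```

Each outcome should occur roughly 1000 times out of 6000; establishing that the probabilities are exactly \(\tfrac{1}{6}\) is precisely what the EVT-based analysis provides.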

Fig. 1. Fast Dice Roller

Contributions. In summary, the contributions of this paper are as follows:

  • We describe, analyze, and implement the first sound numerical approximation algorithm for EVTs in finite discrete- and continuous-time Markov chains. Our algorithm is an adaptation of the known Interval Iteration (II) technique [6, 27, 43].

  • We show that computing (sound bounds on) a Markov chain’s stationary distribution reduces to EVT computations. The resulting algorithm significantly outperforms preexisting techniques [41, 45] for stationary distributions.

  • Similarly, we show how the conditional expected rewards until reaching each of the, say, M absorbing states of a Markov chain can be obtained by computing the EVTs and solving a second linear equation system — this is in contrast to the standard approach which requires solving M equation systems [5, Ch. 10].

  • We implement our algorithm in the probabilistic model checker Storm  [32] and demonstrate its scalability on various benchmarks.

Outline. We define general notation and EVTs in Sections 2 and 3, respectively. In Section 4, we present our sound iterative algorithms for computing EVTs approximately. Sections 5 and 6 present the reductions of stationary distributions and conditional expected rewards to EVTs. We report on the experimental evaluation of our algorithms in Section 7 and summarize related work in Section 8.

2 Background

Let \(\mathbb N\) denote the set of non-negative integers and \(\overline{\mathbb {R}}= \mathbb {R}\cup \{\infty , -\infty \}\) the set of extended real numbers. We equip finite sets \(S \ne \emptyset \) with an arbitrary indexing \(S = \{s_1,\ldots ,s_n\}\) and identify functions of type \(\textbf{v}:S \rightarrow \overline{\mathbb {R}}\) and \(\textbf{A}:{S} \times S' \rightarrow \overline{\mathbb {R}}\) with (column) vectors and matrices, respectively. \(\textbf{I}\) denotes the identity matrix. Vectors are compared component-wise, i.e., \(\textbf{v}\le \textbf{v}'\) iff for all \(s \in S\), \(\textbf{v}(s) \le \textbf{v}'(s)\). Iverson brackets cast the truth value of a Boolean expression B to a numerical value, such that \([B] = 1\) if B is true and \([B] = 0\) otherwise.

Definition 1

A discrete-time Markov chain (DTMC) is defined as a triple \(\mathcal {D}= ( S^{{\mathcal {D}}}, \textbf{P}^{{\mathcal {D}}}, \iota _{\text {init}}^{\mathcal {D}} )\), where \(S^{\mathcal {D}}\) is a finite set of states, \(\textbf{P}^{\mathcal {D}}:S^{\mathcal {D}} \times S^{\mathcal {D}} \rightarrow [0,1]\) is the transition probability function satisfying \(\sum _{t \in S} \textbf{P}^{\mathcal {D}}(s, t)=1\) for all \(s \in S\), and \(\iota _{\text {init}}^{\mathcal {D}} :S \rightarrow [0,1]\) is the initial distribution with \(\sum _{s \in S} \iota _{\text {init}}^{\mathcal {D}}(s)=1\).

We often omit the superscript from objects associated with a DTMC \(\mathcal {D}\) whenever this is clear from context, e.g., we write \(\textbf{P}\) rather than \(\textbf{P}^{\mathcal {D}}\). An infinite path \(\pi = s_{0} s_{1} \cdots \in S^\omega \) in a DTMC \(\mathcal {D}= ( S^{{}}, \textbf{P}^{{}}, \iota _{\text {init}}^{} )\) is a sequence of states such that \(\textbf{P}( s_{i}, s_{i+1} ) >0 \) for all \(i \in \mathbb {N}\). We use \(\pi [i]=s_i\) to refer to the i-th state. \(\textsf{Paths}^{\mathcal {D}}\) denotes the set of all infinite paths in \(\mathcal {D}\). The probability measure \(\text {Pr}^{\mathcal {D}}_{}\) over measurable subsets of \(\textsf{Paths}^{\mathcal {D}}\) is obtained by a standard construction: For a finite path \(\widehat{\pi } = s_0 s_1 \cdots s_k\) we set \(\text {Pr}^{\mathcal {D}}_{}(\textsf{Cyl}(\widehat{\pi })) = \iota _{\text {init}}(s_0) \cdot \prod _{i=0}^{k-1} \textbf{P}(s_i, s_{i+1})\), where the cylinder set \(\textsf{Cyl}(\widehat{\pi })\) contains all possible infinite continuations of \(\widehat{\pi }\). We write \(\text {Pr}^{\mathcal {D}}_{s}\) for the probability measure induced by \(\mathcal {D}\) with the initial distribution assigning probability 1 to \(s \in S\). We use LTL-style notation for measurable sets of infinite paths. For \(R, T \subseteq S\) and \(k \in \mathbb {N}\), let \(R \,\textsf{U}^{=k}\, T\) be the set of infinite paths that visit a state \(s \in T\) in the k-th step while only visiting states in R before. We also define \(\lozenge ^{=k} T = S \,\textsf{U}^{=k}\, T\) and \(\lozenge T = \bigcup _{k \in \mathbb {N}} \lozenge ^{=k} T\).

Expected Rewards. A (non-negative) random variable over the probability space induced by \(\mathcal {D}\) is a measurable function \(\textsf{v}:\textsf{Paths}^{\mathcal {D}} \rightarrow \overline{\mathbb {R}}_{\ge 0}\). Its expected value is given by the Lebesgue integral \(\mathbb {E}^{\mathcal {D}}_{}[\textsf{v}] = \int _{\textsf{Paths}^{\mathcal {D}}} \textsf{v}\,d \text {Pr}^{\mathcal {D}}_{}\). We write \(\mathbb {E}^{\mathcal {D}}_s\) for the expectation obtained under \(\text {Pr}^{\mathcal {D}}_{s}\). The total reward w.r.t. a reward structure \(\textsf{rew}:S \rightarrow \mathbb {R}_{\ge 0}\) is defined by the random variable \(\textsf{tr}_{\textsf{rew}} :\textsf{Paths}^{\mathcal {D}} \rightarrow \overline{\mathbb {R}}_{\ge 0}\) with \(\textsf{tr}_{\textsf{rew}}(\pi ) = \sum _{k=0}^{\infty } \textsf{rew}(\pi [k])\); the expected total reward is \(\mathbb {E}^{\mathcal {D}}_{}[\textsf{tr}_{\textsf{rew}}]\).

Fig. 2. Running example DTMC. The individual EVTs are below the states.

Connectivity in DTMCs. A strongly connected component (SCC) of a DTMC \(\mathcal {D}\) is a set of states \(C \subseteq S\) such that any \(s, t \in C\) are mutually reachable, i.e., \(\text {Pr}^{\mathcal {D}}_{s}(\lozenge \{t\}) > 0\) and \(\text {Pr}^{\mathcal {D}}_{t}(\lozenge \{s\}) > 0\), and no proper superset of C satisfies this property. An SCC C is called a bottom SCC (BSCC) if no state outside C is reachable from C. In the following, \(\textsf{SCC}^{\mathcal {D}}\) denotes the set of SCCs of \(\mathcal {D}\). We call a DTMC absorbing if all its BSCCs are singleton sets, and irreducible if \(\textsf{SCC}^{\mathcal {D}} = \{S\}\). The SCCs are ordered by a strict partial order \(\hookrightarrow \) based on the topology of the DTMC, where \(C' \hookrightarrow C\) if and only if \(\text {Pr}^{\mathcal {D}}_{s'}(\lozenge C) > 0\) for some \(s' \in C'\) and \(C \not = C'\). An SCC chain of \(\mathcal {D}\) is a sequence \(\kappa = C_0 \hookrightarrow C_1 \hookrightarrow \dots \hookrightarrow C_k\) of SCCs \(C_0, C_1, \ldots , C_k \in \textsf{SCC}^{\mathcal {D}}\), where \(k \ge 0\). The set of all SCC chains in \(\mathcal {D}\) is denoted by \(\textsf{Chains}^\mathcal {D}\), and \(\textsf{Chains}_{\text {tr}}^\mathcal {D}\) denotes the set of SCC chains that do not contain a BSCC. The length of SCC chain \(\kappa = C_0 \hookrightarrow C_1 \hookrightarrow \dots \hookrightarrow C_k\) is \(|\kappa | = k\).

A state s is called transient if the probability that the DTMC, starting from s, will ever return to s is strictly less than one; otherwise, s is a recurrent state. Thus, in a finite DTMC, the recurrent states are precisely the states contained in the BSCCs, whereas the transient states coincide with the non-BSCC states. The sets of recurrent and transient states in a DTMC are denoted by \(S_{\text {re}}\) and \(S_{\text {tr}}\), respectively.

Example 1

In the DTMC \(\mathcal {D}\) depicted in Figure 2, states \(s_1,s_2,s_3,s_4\) are transient, and \(s_5, s_6, s_7\) are recurrent. Also, \(\textsf{SCC}^{\mathcal {D}} = \{\{s_1,s_2\}, \{s_3\}, \{s_4\}, \{s_5, s_6\}, \{s_7\}\}\), where only \(\{s_5, s_6\}\) and \(\{s_7\}\) are BSCCs. An example SCC chain is \(\{s_3\} \hookrightarrow \{s_1,s_2\} \hookrightarrow \{s_4\} \hookrightarrow \{s_7\}\); its length is 3.

Stationary distributions. The stationary distribution (also referred to as steady-state or long-run distribution) is a probability distribution that specifies the fraction of time spent in each state in the long run (see, e.g., [5, Def. 10.79]).

Definition 2

The stationary distribution of DTMC \(\mathcal {D}\) is given by \(\theta ^{\mathcal {D}} \in [0,1]^{\vert S \vert }\) with \(\theta ^{\mathcal {D}}(s) = \lim _{n \rightarrow \infty } \frac{1}{n} \sum _{k=0}^{n-1} \text {Pr}^{\mathcal {D}}_{}(\lozenge ^{=k} \{s\})\) for all \(s \in S\).

If \(\mathcal {D}\) is irreducible, the stationary distribution is given by the unique eigenvector \(\theta ^{\mathcal {D}}\) satisfying \(\theta ^{\mathcal {D}} = \theta ^{\mathcal {D}} \cdot \textbf{P}\) and \(\sum _{s \in S} \theta ^{\mathcal {D}}(s) = 1\), see, e.g., [39, Thm. 4.18]. If \(\mathcal {D}\) is reducible, \(\theta ^{\mathcal {D}}\) can be obtained by combining a reachability analysis and the eigenvector computation for each BSCC individually, see Section 5.
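For the irreducible case, the eigenvector characterization can be made concrete by power iteration of \(\theta \mapsto \theta \cdot \textbf{P}\). The Python sketch below uses a hypothetical irreducible, aperiodic two-state chain (not an example from this paper); aperiodicity is what makes the plain power iteration converge here.

```python
# Hypothetical irreducible, aperiodic two-state DTMC
P = [[0.9, 0.1],
     [0.2, 0.8]]

theta = [1.0, 0.0]            # any initial distribution works for this chain
for _ in range(2000):         # power iteration: theta <- theta * P
    theta = [sum(theta[t] * P[t][s] for t in range(2)) for s in range(2)]

# The exact solution of theta = theta * P with sum(theta) = 1 is (2/3, 1/3).
```

Solving \(\theta = \theta \cdot \textbf{P}\) directly as a linear system (as done for EVTs later) is equally possible; the iteration is shown only because it mirrors the iterative methods used throughout the paper.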

Definition 3

A continuous-time Markov chain (CTMC) is a quadruple \(\mathcal {C}= ( S, \textbf{P}, \iota _{\text {init}}, \boldsymbol{r} )\), where \(( S^{{}}, \textbf{P}^{{}}, \iota _{\text {init}}^{} )\) is a DTMC and \(\boldsymbol{r}:S \rightarrow \mathbb {R}_{>0}\) defines exit rates.

CTMCs extend DTMCs by assigning the rate of an exponentially distributed residence time to each state \(s \in S\). We denote the embedded DTMC of a CTMC \(\mathcal {C}\) by \(\textsf{emb}({\mathcal {C}}) = ( S^{{}}, \textbf{P}^{{}}, \iota _{\text {init}}^{} )\). The semantics are defined in the usual way (see, e.g., [4, 40]). An infinite timed path in a CTMC \(\mathcal {C}= ( S, \textbf{P}, \iota _{\text {init}}, \boldsymbol{r} )\) is a sequence \(\pi = s_0 \overset{\tau _0}{\longrightarrow }\ s_{1} \overset{\tau _1}{\longrightarrow }\cdots \) consisting of states \(s_{0}, s_{1}, \ldots \in S\) and time instances \(\tau _0, \tau _1 \ldots \in \mathbb {R}_{\ge 0}\), such that \(\textbf{P}\bigl (s_{i}, s_{i+1}\bigr ) >0 \) for all \(i \in \mathbb {N}\). We denote the time \(\tau _i\) spent in state \(s_{i}\) by \( time _i(\pi )\). Notations \(\pi [i]\) and \(\textsf{Paths}^{\mathcal {C}}\) are as for DTMCs. The probability measure of \(\mathcal {C}\) over infinite timed paths [4] is denoted by \(\text {Pr}^{\mathcal {C}}_{}\). In a CTMC, the total reward w.r.t. a reward structure \(\textsf{rew}:S \rightarrow \mathbb {R}_{\ge 0}\) is the random variable \(\textsf{tr}_{\textsf{rew}} :\textsf{Paths}^{\mathcal {C}} \rightarrow \overline{\mathbb {R}}_{\ge 0}\) with \(\textsf{tr}_{\textsf{rew}}(\pi ) = \sum _{i=0}^{\infty } \textsf{rew}(\pi [i]) \cdot time _i(\pi )\).

3 Expected Visiting Times

We provide characterizations of expected visiting times for a fixed DTMC \(\mathcal {D}= ( S^{{}}, \textbf{P}^{{}}, \iota _{\text {init}}^{} )\). Omitted proofs are in the extended version of this paper [47].

Definition 4

The expected visiting time (EVT) of a state \(s \in S\) is the expected value \(\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_{s}]\) of the random variable \(\textsf{vt}_s\) with \(\textsf{vt}_s(\pi ) = \sum _{k=0}^{\infty } [\pi [k] = s]\).

Example 2

The EVTs of the DTMC from Figure 2 are depicted below its states.

Intuitively, random variable \(\textsf{vt}_s\) counts the number of times state s occurs on an infinite path. Consequently, the EVTs of unreachable states and reachable recurrent states in a DTMC are always 0 and \(\infty \), respectively. For this reason we focus on the EVTs of the transient states \(S_{\text {tr}}\). The following lemma provides an alternative characterization of EVTs in terms of expected total rewards.

Lemma 1

For a fixed \(s \in S\) and \(x \in \mathbb {R}_{>0}\), the reward structure \(\textsf{rew}\) given by \(\textsf{rew}(t) = x \cdot [t = s]\) for all \(t \in S\) satisfies \(\mathbb {E}^{}_{}[\textsf{vt}_s] = \frac{1}{x} \cdot \mathbb {E}^{}_{}[\textsf{tr}_{\textsf{rew}}]\).

By Lemma 1, EVTs can be obtained using existing algorithms for expected total rewards. This approach is, however, inefficient for computing the EVTs of multiple states since it requires solving an equation system for each single state.

Next, we elaborate on EVTs for multiple states as a solution of a single linear equation system. In [36, Def. 3.2.2], EVTs are defined using the so-called fundamental matrix for absorbing DTMCs. The fundamental matrix contains, for each pair of start and target states s and t, the EVT \(\mathbb {E}^{}_{s}[\textsf{vt}_t]\) as its entry. Computing the fundamental matrix explicitly becomes infeasible for large models as it requires determining the inverse of a \(|S_{\text {tr}}| \times |S_{\text {tr}}|\) matrix. To obtain the vector \((\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}\) of EVTs that take the initial distribution of \(\mathcal {D}\) into account, it suffices to solve a single equation system which is linear in the size of the DTMC. The same equation system arises by applying the dual linear program for expected rewards in MDPs [49, Ch. 7.2.7] to the special case of DTMCs.

Theorem 1

([36, Cor. 3.3.6]). \((\mathbb {E}^{}_{}[\textsf{vt}_{s}])_{s \in S_{\text {tr}}}\) is the unique solution \((\textbf{x}(s))_{s \in S_{\text {tr}}}\) of the following equation system: \(\forall s \in S_{\text {tr}}:\textbf{x}(s) = \iota _{\text {init}}(s)+ \sum _{t \in S_{\text {tr}}} \textbf{P}(t, s) \cdot \textbf{x}(t) \).

Intuitively, this equation system shows that a state s can be visited initially and that it can receive visits from its predecessor states, i.e., the EVT is computed by considering the incoming transitions to a state. As a consequence, we obtain that the EVTs of the transient states are always finite. In particular, if \(s \in S_{\text {tr}}\) is reachable, then \(\mathbb {E}^{}_{}[\textsf{vt}_{s}] \in \mathbb {R}_{>0}\), and otherwise \(\mathbb {E}^{}_{}[\textsf{vt}_{s}] = 0\).

Example 3

Reconsider the DTMC from Figure 2 with transient states \(S_{\text {tr}}= \{s_1,s_2,s_3,s_4\}\). The EVTs of \(S_{\text {tr}}\) are the unique solution \((\textbf{x}(s))_{s \in S_{\text {tr}}}\) of

$$\begin{aligned} \begin{array}{ll} \textbf{x}(s_1) = 0.4 + 1.0\cdot \textbf{x}(s_2) + 0.7\cdot \textbf{x}(s_3) &{}\qquad \quad \textbf{x}(s_2) = 0.5\cdot \textbf{x}(s_1)\\ \textbf{x}(s_4) = 0.5 \cdot \textbf{x}(s_1) + 0.3 \cdot \textbf{x}(s_3) + 0.8 \cdot \textbf{x}(s_4) &{}\qquad \quad \textbf{x}(s_3) = 0.6 \end{array} \end{aligned}$$
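For a system of this size, the unique solution can be computed exactly. As an illustration (not part of the paper's toolchain), the following Python sketch rebuilds the system above in matrix form \((\textbf{I} - \textbf{P}^{\top })\,\textbf{x} = \iota _{\text {init}}\) over the transient states and solves it with exact rational arithmetic; the transition probabilities are read off from the coefficients of the equations.

```python
from fractions import Fraction as F

states = ['s1', 's2', 's3', 's4']
# P[(t, s)] = transition probability from t to s, taken from Example 3
P = {('s2', 's1'): F(1), ('s3', 's1'): F(7, 10),
     ('s1', 's2'): F(1, 2),
     ('s1', 's4'): F(1, 2), ('s3', 's4'): F(3, 10), ('s4', 's4'): F(4, 5)}
iota = {'s1': F(2, 5), 's2': F(0), 's3': F(3, 5), 's4': F(0)}

n = len(states)
# Row i encodes  x(s_i) - sum_t P(t, s_i) * x(t) = iota(s_i)
A = [[(F(1) if i == j else F(0)) - P.get((states[j], states[i]), F(0))
      for j in range(n)] for i in range(n)]
b = [iota[s] for s in states]

# Gauss-Jordan elimination over the rationals (exact)
for i in range(n):
    p = next(r for r in range(i, n) if A[r][i] != 0)
    A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
    for r in range(n):
        if r != i and A[r][i] != 0:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]

evt = {s: b[i] / A[i][i] for i, s in enumerate(states)}
```

The exact EVTs are \((\tfrac{41}{25}, \tfrac{41}{50}, \tfrac{3}{5}, 5) = (1.64, 0.82, 0.6, 5)\), the limits which the iterative methods of Section 4 approach.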

Expected visiting times in CTMCs. Following [37], we define the EVT of a state \(s\in S^\mathcal {C}\) of a CTMC \(\mathcal {C}\) as the expected value \(\mathbb {E}^{\mathcal {C}}_{}[\textsf{vt}_s]\) of the random variable \(\textsf{vt}_s :\textsf{Paths}^{\mathcal {C}} \rightarrow \overline{\mathbb {R}}_{\ge 0}\) with \(\textsf{vt}_s(\pi ) = \sum _{i=0}^{\infty } [\pi [i] = s] \cdot time _i(\pi )\). Intuitively, \(\textsf{vt}_s\) measures the total time the system spends in state s. Computing EVTs in CTMCs reduces to the discrete-time case: The EVT of state s coincides with the EVT in the embedded DTMC weighted by the expected residence time \(\frac{1}{\boldsymbol{r}(s)}\) in s:

Theorem 2

For all states \(s \in S\), it holds that \(\mathbb {E}^{\mathcal {C}}_{}[\textsf{vt}_s] = \frac{1}{\boldsymbol{r}(s)} \cdot \mathbb {E}^{\textsf{emb}(\mathcal {C})}_{}[\textsf{vt}_s]\).

Theorem 2 implies that all results and algorithms to compute EVTs in DTMCs are readily applicable to CTMCs, too. We thus focus on DTMCs in the remainder.
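Theorem 2 makes the CTMC case a cheap post-processing step. In the sketch below, the embedded-DTMC EVTs are those of the chain from Example 3, while the exit rates are hypothetical values chosen purely for illustration.

```python
def ctmc_evts(embedded_evts, rates):
    # Theorem 2: E^C[vt_s] = (1 / r(s)) * E^emb(C)[vt_s]
    return {s: v / rates[s] for s, v in embedded_evts.items()}

emb = {'s1': 1.64, 's2': 0.82, 's3': 0.6, 's4': 5.0}   # EVTs of Example 3
rates = {'s1': 2.0, 's2': 1.0, 's3': 4.0, 's4': 0.5}   # hypothetical r(s)
evts = ctmc_evts(emb, rates)
```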

4 Accurately Computing EVTs

In this section, we discuss algorithms to compute EVTs approximately: An unsound value iteration algorithm (Section 4.1), its sound interval iteration extension (Section 4.2), and finally a topological, i.e., SCC-wise algorithm (Section 4.3). Since the EVTs for recurrent states are always either 0 or \(\infty \), we focus on the EVTs of the transient states. Omitted proofs are in the extended version [47].

4.1 Value Iteration

Value Iteration (VI) was originally introduced to approximate expected rewards in MDPs [7]. In a broader sense, VI simply refers to iterating a function \(f :\mathbb {R}^{|S|} \rightarrow \mathbb {R}^{|S|}\) (called Bellman operator in the MDP setting) from some given initial vector \(\textbf{x}^{(0)}\), i.e., to compute the sequence \(\textbf{x}^{(1)} = f(\textbf{x}^{(0)}), \textbf{x}^{(2)} = f(\textbf{x}^{(1)})\), etc. Instances of VI are usually set up such that the sequence converges to a (generally non-unique) fixed point \(\textbf{x}= f(\textbf{x})\). In this paper, we only consider VI for the case where f is a linear function \(f(\textbf{x}) = \textbf{A}\textbf{x}+ \textbf{b}\), where \(\textbf{A}\) and \(\textbf{b}\) are a matrix and a vector, respectively. A fixed point \(\textbf{x}\) of f is then a solution of the linear equation system \((\textbf{I}- \textbf{A}) \textbf{x}= \textbf{b}\). Other iterative methods for solving linear equation systems such as the Jacobi or Gauss-Seidel method can be considered optimized variants of VI, and are applicable in our setting as well, see [47].

Value iteration for EVTs. For EVTs, the function iterated during VI is as follows:

Definition 5

The EVTs-operator \(\varPhi : \mathbb {R}^{|S_{\text {tr}} |} \rightarrow \mathbb {R}^{|S_{\text {tr}} |}\) for DTMC \(\mathcal {D}\) is defined as

$$\begin{aligned} \varPhi (\textbf{x}) = \Bigl ({\iota _{\text {init}}(s) + \sum _{t \in S_{\text {tr}}} \textbf{x}(t) \cdot \textbf{P}(t, s)} \Bigr )_{s \in S_{\text {tr}}} ~. \end{aligned}$$

The above definition is motivated by Theorem 1. The following result, which is analogous to [49, Thm. 6.3.1], means that VI for EVTs (stated explicitly as Algorithm 1 for the sake of concreteness) works for arbitrary initial vectors.

Theorem 3

The EVTs-operator from Definition 5 has the following properties:

(i):

\((\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}\) is the unique fixed point of \(\varPhi \).

(ii):

For all \(\textbf{x}^{(0)} \in \mathbb {R}^{|S_{\text {tr}} |}\) we have \(\lim _{k \rightarrow \infty } \varPhi ^{(k)}(\textbf{x}^{(0)})= (\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}\).

Algorithm 1. Value iteration for EVTs.
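As an executable illustration (assuming the DTMC from Example 3, with the transition probabilities read off from its equation system), the following Python sketch iterates \(\varPhi \) from an arbitrary starting vector until consecutive iterates are close; the discussion of stopping criteria below explains why this stopping rule is not sound in general.

```python
states = ['s1', 's2', 's3', 's4']
# P[(t, s)] = transition probability from t to s (the chain of Example 3)
P = {('s2', 's1'): 1.0, ('s3', 's1'): 0.7,
     ('s1', 's2'): 0.5,
     ('s1', 's4'): 0.5, ('s3', 's4'): 0.3, ('s4', 's4'): 0.8}
iota = {'s1': 0.4, 's2': 0.0, 's3': 0.6, 's4': 0.0}

def phi(x):
    # EVTs-operator of Definition 5
    return {s: iota[s] + sum(P.get((t, s), 0.0) * x[t] for t in states)
            for s in states}

x = {s: 7.0 for s in states}   # Theorem 3: any starting vector works
for _ in range(100_000):
    x_new = phi(x)
    if max(abs(x_new[s] - x[s]) for s in states) < 1e-12:
        x = x_new
        break
    x = x_new
```

From this (or any other) starting vector, the iteration converges to the unique fixed point \((1.64, 0.82, 0.6, 5)\).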

When to stop VI? A general issue with value iteration is that even if the generated sequence converges to the desired fixed point in the limit, it is not easy to determine how many iterations are necessary to obtain an \(\epsilon \)-precise result. An ad hoc solution, which is implemented in probabilistic model checkers such as Storm  [32], prism  [41], and mcsta  [29], is to stop the iteration once the difference between two consecutive approximations is small, i.e., the number of iterations is the smallest \(k > 0\) such that \(\textsf{diff}^{}(\textbf{x}^{(k)}, \textbf{x}^{(k-1)}) < \epsilon \) for some predefined fixed \(\epsilon > 0\). Common choices for the distance \(\textsf{diff}^{}\) between vectors \(\textbf{x},\textbf{y}\in \mathbb {R}^{|S|}\) are the absolute difference, and the relative difference

$$\begin{aligned} \textsf{diff}^{abs}(\textbf{x}, \textbf{y}) = \max _{s \in S} \vert \textbf{x}(s) - \textbf{y}(s) \vert \qquad \text {and} \qquad \textsf{diff}^{rel}(\textbf{x}, \textbf{y}) = \max _{s \in S} \frac{\vert \textbf{x}(s) - \textbf{y}(s) \vert }{\vert \textbf{y}(s) \vert }~, \end{aligned}$$

where by convention \(0/0 = 0\) and \(a/0 = \infty \) for \(a \ne 0\). As pointed out by various authors [6, 27, 43, 50, 59], there exist instances where the iteration terminates with a result which vastly differs from the true fixed point, even if \(\epsilon \) is small (e.g. \(\epsilon =10^{-6}\)). An example of this for the EVT variant of VI is given in [47].
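This failure mode is easy to reproduce. The toy instance below (a single transient state with a self-loop of probability \(1 - 10^{-4}\); our own hypothetical example, not the one from [47]) stops with a consecutive-difference threshold of \(10^{-6}\) while the result is still off by roughly \(10^{-2}\):

```python
# One transient state s with P(s, s) = 1 - delta and iota(s) = 1;
# by Theorem 1 its exact EVT is 1 / delta.
delta = 1e-4
exact = 1.0 / delta

x = 0.0
while True:
    x_new = 1.0 + (1.0 - delta) * x   # the EVTs-operator Phi
    if abs(x_new - x) < 1e-6:         # naive stopping criterion
        x = x_new
        break
    x = x_new

error = exact - x   # about 10^4 times larger than the threshold
```

The consecutive difference equals \(\delta \) times the remaining error, so the reported precision overstates the actual one by a factor of \(1/\delta \).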

4.2 Interval Iteration

Interval iteration (II) [6, 27, 43] is an extension of VI that formally guarantees \(\epsilon \)-close results for all possible inputs. The general idea of II is to construct two sequences of vectors \((\textbf{l}^{(k)})_{k \in \mathbb {N}}\) and \((\textbf{u}^{(k)})_{k \in \mathbb {N}}\) such that for all \(k \in \mathbb {N}\) we have \(\textbf{l}^{(k)} \le \textbf{x}\le \textbf{u}^{(k)}\), where \(\textbf{x}\) is the desired fixed point solution. II can be stopped with precision guarantee \(\epsilon \) once it detects that \(\textsf{diff}^{}(\textbf{l}^{(k)}, \textbf{u}^{(k)}) \le \epsilon \).

Initial bounds for II. In general, II requires initial vectors \(\textbf{l}^{(0)}\) and \(\textbf{u}^{(0)}\) which are already sound (but perhaps very crude) lower and upper bounds on the solution. In the case of EVTs, we can use \(\textbf{l}^{(0)} = \textbf{0}\). Finding an upper bound \(\textbf{u}^{(0)} \ge (\mathbb {E}^{}_{}[\textsf{vt}_{s}])_{s \in S_{\text {tr}}}\) is more involved since EVTs may be unboundedly large in general. We solve this issue using a technique from [6].

II for EVTs. In Lemma 2 below we show that once we have found initial bounds \(\textbf{l}^{(0)}\) and \(\textbf{u}^{(0)}\), we can readily perform a sound II for EVTs by simply iterating the operator \(\varPhi \) from Definition 5 on \(\textbf{l}^{(0)}\) and \(\textbf{u}^{(0)}\) in parallel. Inspired by [6], we propose the following optimization to speed up convergence: Whenever \(\varPhi \) decreases the current lower bound in some entries, we retain the old values for these entries (and similarly for upper bounds). The next definition formalizes this.

Definition 6

The Max and Min EVTs-operators \({\varPhi }_{max}, {\varPhi }_{min}: \mathbb {R}^{|S_{\text {tr}} |} \rightarrow \mathbb {R}^{|S_{\text {tr}} |}\) are defined by \({\varPhi }_{max}(\textbf{x}) = \textsf{max}\left\{ \textbf{x}, \varPhi (\textbf{x})\right\} = (\textsf{max}\{\textbf{x}(s) , (\varPhi (\textbf{x}))(s)\})_{s\in S_{\text {tr}}}\) and \({\varPhi }_{min}(\textbf{x}) = \textsf{min}\{\textbf{x}, \varPhi (\textbf{x})\} = (\textsf{min}\{ \textbf{x}(s) , (\varPhi (\textbf{x}))(s)\})_{s\in S_{\text {tr}}}\).

The following result is analogous to [6, Lem. 3.3]:

Lemma 2

Let \(\textbf{u},\textbf{l}\in \mathbb {R}^{|S_{\text {tr}} |}\) with \(\textbf{l}\le (\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}} \le \textbf{u}\). Then,

(i):

\({\varPhi }_{max}^{(k)}(\textbf{l}) \le {\varPhi }_{max}^{(k+1)}(\textbf{l}) \quad \text {and} \quad {\varPhi }_{min}^{(k)}(\textbf{u}) \ge {\varPhi }_{min}^{(k+1)}(\textbf{u})\) for all \(k \in \mathbb {N}\).

(ii):

\(\varPhi ^{(k)}(\textbf{l}) \le {\varPhi }_{max}^{(k)}(\textbf{l}) \le \left( \mathbb {E}^{}_{}[\textsf{vt}_s]\right) _{s\in S_{\text {tr}}} \le {\varPhi }_{min}^{(k)}(\textbf{u}) \le \varPhi ^{(k)}(\textbf{u})\) for all \(k \in \mathbb {N}\).

(iii):

\( \lim _{k\rightarrow \infty } {\varPhi }_{max}^{(k)}(\textbf{l}) = \lim _{k \rightarrow \infty } {\varPhi }_{min}^{(k)}(\textbf{u}) = \left( \mathbb {E}^{}_{}[\textsf{vt}_s]\right) _{s\in S_{\text {tr}}}\).

The resulting II algorithm for EVTs is presented as Algorithm 2. Note the following additional optimization: The algorithm stops as soon as \(\textsf{diff}^{crit}(\textbf{u}^{(k)}, \textbf{l}^{(k)}) \le 2 \epsilon \) and returns the mean of \(\textbf{u}^{(k)}\) and \(\textbf{l}^{(k)}\), ensuring that the absolute or relative difference between \((\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}\) and the returned result is at most \(\epsilon \).

Example 4

We illustrate a run of Algorithm 2 on the DTMC from Figure 2, with \(crit = abs\) and \(\epsilon =0.05\) (the following numbers are rounded to 4 decimal digits):

  k     \(\textbf{l}^{(k)}\)                    \(\textbf{u}^{(k)}\)                    \(\textsf{diff}^{abs}(\textbf{l}^{(k)}, \textbf{u}^{(k)})\)
  0     (0.000, 0.000, 0.000, 0.000)    (2.000, 2.000, 1.000, 5.000)    5.000
  1     (0.400, 0.000, 0.600, 0.000)    (2.000, 1.000, 0.600, 5.000)    5.000
  2     (0.820, 0.200, 0.600, 0.380)    (1.820, 1.000, 0.600, 5.000)    4.620
  \(\cdots \)
  22    (1.639, 0.819, 0.600, 4.899)    (1.640, 0.820, 0.600, 5.000)    0.101
  23    (1.639, 0.819, 0.600, 4.919)    (1.640, 0.820, 0.600, 5.000)    \(\mathbf {0.081}\)

After \(k=23\) iterations, the algorithm stops as \(\textsf{diff}^{abs}(\textbf{l}^{(k)}, \textbf{u}^{(k)}) = 0.081 \le 2 \cdot \epsilon \) and outputs the mean \(\tfrac{1}{2}(\textbf{l}^{(23)} + \textbf{u}^{(23)})\).

Algorithm 2. Interval iteration for EVTs.
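The core loop of Algorithm 2 fits in a few lines. The following Python sketch (an illustration, using the chain and the initial bounds of Example 4 with the absolute criterion) applies \(\varPhi _{max}\) to the lower and \(\varPhi _{min}\) to the upper bound until the gap is at most \(2\epsilon \):

```python
states = ['s1', 's2', 's3', 's4']
P = {('s2', 's1'): 1.0, ('s3', 's1'): 0.7, ('s1', 's2'): 0.5,
     ('s1', 's4'): 0.5, ('s3', 's4'): 0.3, ('s4', 's4'): 0.8}
iota = {'s1': 0.4, 's2': 0.0, 's3': 0.6, 's4': 0.0}

def phi(x):
    return {s: iota[s] + sum(P.get((t, s), 0.0) * x[t] for t in states)
            for s in states}

def interval_iteration(l, u, eps):
    while True:
        l = {s: max(l[s], v) for s, v in phi(l).items()}  # Phi_max
        u = {s: min(u[s], v) for s, v in phi(u).items()}  # Phi_min
        if max(u[s] - l[s] for s in states) <= 2 * eps:   # diff^abs <= 2*eps
            return {s: 0.5 * (l[s] + u[s]) for s in states}

res = interval_iteration({s: 0.0 for s in states},
                         {'s1': 2.0, 's2': 2.0, 's3': 1.0, 's4': 5.0},
                         eps=0.05)
```

The run reproduces the bounds shown in Example 4, and by Theorem 4 the returned vector is within \(\epsilon = 0.05\) of the exact EVTs \((1.64, 0.82, 0.6, 5)\).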

Theorem 4

(Correctness of Algorithm 2). Given an input DTMC \(\mathcal {D}\), initial vectors \(\textbf{l}^{(0)}\), \(\textbf{u}^{(0)}\) with \(\textbf{l}^{(0)} \le (\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}} \le \textbf{u}^{(0)}\), \(crit\in \{abs,rel\}\), and a threshold \(\epsilon >0\), Algorithm 2 terminates and returns a vector \(\textbf{x}^{res} \in \mathbb {R}^{|S_{\text {tr}} |}\) satisfying \(\textsf{diff}^{crit} (\textbf{x}^{res},(\mathbb {E}^{}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}) \le \epsilon \).

Remark 1

The monotonicity of the sequences \(({\varPhi }_{max}^{(k)}(\textbf{l}))_{k \in \mathbb {N}}\) and \(({\varPhi }_{min}^{(k)}(\textbf{u}))_{k \in \mathbb {N}}\) (see Lemma 2 (i)) is not used in the proof of Theorem 4. By Lemma 2 (ii), we can replace \({\varPhi }_{max}\) and \({\varPhi }_{min}\) with \(\varPhi \) in Algorithm 2 and still obtain \(\epsilon \)-sound results. However, using \({\varPhi }_{max}\) and \({\varPhi }_{min}\) instead of \(\varPhi \) can lead to faster convergence.

4.3 Topological Algorithm

To increase the efficiency of VI for the analysis of rewards and probabilities in MDPs, several authors have proposed topological VI [15, 16], which is also known as blockwise VI [14]. The idea is to avoid the analysis of the complete model at once and instead consider the strongly connected components (SCCs) sequentially based on the order relation \(\hookrightarrow \). We lift this approach to EVTs and in particular consider error propagations when approximative methods are used.

SCC Restrictions. To formalize the topological approach, we introduce the SCC restriction \(\mathcal {D}{\vert }_{C}[\textbf{x}]\) of DTMC \(\mathcal {D}\) to \(C \in \textsf{SCC}^{\mathcal {D}}\) with parameters \(\textbf{x}\in \mathbb {R}^{|S_{\text {tr}}|}\). Intuitively, \(\mathcal {D}{\vert }_{C}[\textbf{x}]\) is a DTMC-like model obtained by restricting \(\mathcal {D}\) to the states C and assigning each \(s \in C\) the “initial value” \(\iota _{\text {init}}^{\mathcal {D}}(s) + \sum _{s' \in S\setminus C} \textbf{P}^{\mathcal {D}}(s',s) \cdot \textbf{x}(s')\). The idea is that \(\textbf{x}\) is an approximation of the EVTs of the predecessor SCCs \(C' \hookrightarrow C\). We also define \(\mathcal {D}{\vert }_{C} = \mathcal {D}{\vert }_{C}[\textbf{x}]\) with \(\textbf{x}(s) = \mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s]\) (i.e., the exact EVT) for each state \(s \in S\) that can reach C with positive probability in one step. See [47, Definition 7] for more formal details.

Fig. 3. SCC restriction.

Example 5

The SCC restriction of the DTMC \(\mathcal {D}\) from Figure 2 to the SCC \(C = \{s_1,s_2\}\) with parameters \(\textbf{x}= (\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S}\) is depicted in Figure 3. Note that the initial values depend only on the initial distribution \(\iota _{\text {init}}^\mathcal {D}\) and the \(\textbf{x}\)-values of the states that can reach the SCC C in one step.

Lemma 3

For non-bottom SCC C and \(s \in C\) we have \(\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s] = \mathbb {E}^{\mathcal {D}{\vert }_{C}}_{}[\textsf{vt}_s]\).

Remark 2

Since a parametric SCC restriction \(\mathcal {D}{\vert }_{C}[\textbf{x}]\) is defined for arbitrary vectors \(\textbf{x}\in \mathbb {R}^{|S_{\text {tr}} |}\), the initial values do not necessarily form a probability distribution. Strictly speaking, this means that \(\mathcal {D}{\vert }_{C}[\textbf{x}]\) is not a DTMC in general. Thus, by abuse of notation, we define the “EVTs” in the parametric SCC restriction \(\mathcal {D}{\vert }_{C}[\textbf{x}]\) as the unique solution of the following linear equation system: For all \(s \in C\): \(\textbf{x}(s) = \iota _{\text {init}}^{\mathcal {D}{\vert }_{C}[\textbf{x}]}(s)+ \sum _{t \in C} \textbf{P}^{\mathcal {D}{\vert }_{C}}(t, s) \cdot \textbf{x}(t)\). To solve this system, we can still apply the methods described in Sections 4.1 and 4.2 without further ado.

Algorithm 3. Topological computation of EVTs.

We now describe an algorithm for computing the EVTs SCC-wise in topological order. The desired precision \(\epsilon \ge 0\) is an input parameter (\(\epsilon = 0\) is possible). Due to space limitations we only discuss relative precision, see [47] for an algorithm with absolute precision. The idea is to solve the linear equation systems tailored to the parametric SCC restrictions, each of which is constructed based on the analysis of the preceding SCCs. Algorithm 3 outlines the procedure.

The algorithm first decomposes the input DTMC \(\mathcal {D}\) into its non-bottom SCCs: The function \(\textsf{SCC}^{\mathcal {D}}_{\text {tr}}\) called in Line 1 returns the set \(\{C_1,\dots ,C_n\}\) of SCCs of \(\mathcal {D}\) consisting of transient states, i.e., the non-bottom SCCs. We assume that the SCCs are indexed such that \(C_i \hookrightarrow C_j\) implies \(i<j\), and the algorithm considers them in this topological order. For the analysis of an SCC \(C_j\), \(j \in \{1, \dots ,n\}\), the algorithm constructs the parametric SCC restriction \(\mathcal {D}{\vert }_{C_j}[\textbf{x}]\), which is based on the results \(\textbf{x}(t)\) for states t in SCCs \(C_i\) that are topologically before \(C_j\). In Line 5, a “black box” computes the (approximate) EVT vector \((\mathbb {E}^{\mathcal {D}{\vert }_{C_j}[\textbf{x}]}_{}[\textsf{vt}_s])_{s \in C_j}\); the corresponding entries of \(\textbf{x}\in \mathbb {R}^{|S_{\text {tr}} |}\) are then updated in Line 6. After each non-bottom SCC has been processed, the algorithm terminates and returns the vector \(\textbf{x}\), which contains the approximated EVTs of all transient states. The following lemma provides an upper bound on the error that accumulates during the topological computation.
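To make the procedure concrete, the following Python sketch runs it on the running example. The SCC decomposition and its topological order are taken from Example 1 as given (in practice they would be computed, e.g., with Tarjan's algorithm), and plain value iteration stands in for the black-box EVT solver of Line 5.

```python
states = ['s1', 's2', 's3', 's4']
P = {('s2', 's1'): 1.0, ('s3', 's1'): 0.7, ('s1', 's2'): 0.5,
     ('s1', 's4'): 0.5, ('s3', 's4'): 0.3, ('s4', 's4'): 0.8}
iota = {'s1': 0.4, 's2': 0.0, 's3': 0.6, 's4': 0.0}
sccs = [['s3'], ['s1', 's2'], ['s4']]   # non-bottom SCCs in topological order

x = {s: 0.0 for s in states}
for C in sccs:
    # "Initial value" of s in C: iota(s) plus inflow from preceding SCCs
    init = {s: iota[s] + sum(P.get((t, s), 0.0) * x[t]
                             for t in states if t not in C) for s in C}
    # Black-box solver for the SCC restriction: value iteration within C
    y = {s: 0.0 for s in C}
    for _ in range(2000):
        y = {s: init[s] + sum(P.get((t, s), 0.0) * y[t] for t in C)
             for s in C}
    x.update(y)
```

Processing \(\{s_3\}\) first, then \(\{s_1, s_2\}\), then \(\{s_4\}\) reproduces the EVTs \((1.64, 0.82, 0.6, 5)\) of the running example.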

Lemma 4

Let \(\epsilon \in [0,1)\) and let \(\textbf{x}\in \mathbb {R}^{|S_{\text {tr}} |}\) such that for every non-bottom SCC C, \((\textbf{x}(s))_{s\in C}\) satisfies \(\textsf{diff}^{rel}\left( (\textbf{x}(s))_{s\in C}, (\mathbb {E}^{\mathcal {D}{\vert }_{C}[\textbf{x}]}_{}[\textsf{vt}_s])_{s \in C} \right) \le \epsilon \). Then

$$\begin{aligned} \textsf{diff}^{rel}\left( \textbf{x}, (\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}} \right) \le (1+\epsilon )^{L+1} -1, \end{aligned}$$

where \(L\) is the largest length of a chain of non-bottom SCCs.

Theorem 5

(Correctness of Algorithm 3). Algorithm 3 returns a vector \(\textbf{x}^{res} \in \mathbb {R}^{|S_{\text {tr}} |}\) such that \(\textsf{diff}^{rel}(\textbf{x}^{res}, (\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}) \le \epsilon \).

5 Stationary Distributions via EVTs

We show that EVTs can be used to determine sound approximations of the stationary distribution \(\theta ^{\mathcal {D}}\) of a (reducible) DTMC \(\mathcal {D}\). It is known that \(\theta ^{\mathcal {D}}\) can be computed as follows (see, e.g., [39, Thm. 4.23]). For each BSCC B:

  • Compute the reachability probability \(\Pr ^{\mathcal {D}}(\lozenge B)\) of B.

  • Determine the stationary distribution \(\theta ^{\mathcal {D}{\vert }_{B}}\) of the DTMC restricted to B.

Then, we obtain the stationary distribution for the recurrent states s in the BSCC B: \(\theta ^{\mathcal {D}}(s) = \Pr ^{\mathcal {D}}(\lozenge B) \cdot \theta ^{\mathcal {D}{\vert }_{B}}(s)\). For the remaining transient states, we have \(\theta ^{\mathcal {D}}(s) = 0\). We show how both quantities, \(\Pr ^{\mathcal {D}}(\lozenge B)\) and \(\theta ^{\mathcal {D}{\vert }_{B}}\), can be computed efficiently for every BSCC B using EVTs. We also elaborate on how relative errors propagate through the computation, allowing us to derive sound lower and upper bounds for the stationary distribution \(\theta ^{\mathcal {D}}\). Omitted proofs are in the extended report [47].

Computing BSCC reachability probabilities. The absorption probabilities, i.e., the probabilities of reaching each singleton BSCC, can be computed using EVTs [36]. By collapsing each BSCC into a single state, we obtain a slightly generalized result:

Theorem 6

([36, Thm. 3.3.7]). For any BSCC B of a DTMC \(\mathcal {D}\) it holds that \(\Pr ^{\mathcal {D}}(\lozenge B) = \iota _{\text {init}}(B) + \sum _{s \in S_{\text {tr}}} \mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s] \cdot \textbf{P}(s, B)\), where \(\textbf{P}(s, B) = \sum _{t \in B} \textbf{P}(s,t)\) and \(\iota _{\text {init}}(B) = \sum _{t \in B} \iota _{\text {init}}(t)\).

Applying Theorem 6, we can compute the EVTs once to derive the reachability probabilities for every BSCC. Further, when using interval iteration from Section 4.2 to obtain \(\textbf{x}\in \mathbb {R}^{|S |}\) with \(\textsf{diff}^{rel}\left( \textbf{x}, (\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S} \right) \le \epsilon \) for some \(\epsilon \in (0,1)\), the relative error does not increase when deriving the reachability probabilities, i.e., for every BSCC B of \(\mathcal {D}\), \(\textsf{diff}^{rel}\left( \iota _{\text {init}}(B) + \sum _{s \in S_{\text {tr}}} \textbf{x}(s) \cdot \textbf{P}(s, B),\, \Pr ^{\mathcal {D}}(\lozenge B) \right) \le \epsilon \).
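The derivation step can be sketched in a few lines. The helper below is hypothetical (names and signature are ours) and assumes the absorption-probability identity from [36] in the form \(\Pr ^{\mathcal {D}}(\lozenge B) = \iota _{\text {init}}(B) + \sum _{s \in S_{\text {tr}}} \mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s] \cdot \textbf{P}(s, B)\):

```python
import numpy as np

def bscc_reach_probs(P, iota, evt, transient, bsccs):
    """Derive BSCC reachability probabilities from (approximate) EVTs.

    Uses Pr(reach B) = iota(B) + sum_{s transient} evt(s) * P(s, B).
    `bsccs` is a list of index lists, one per BSCC. Illustrative sketch.
    """
    tr = np.nonzero(transient)[0]
    probs = []
    for B in bsccs:
        # Initial mass in B plus EVT-weighted inflow from transient states.
        p = iota[B].sum() + evt[tr] @ P[np.ix_(tr, B)].sum(axis=1)
        probs.append(p)
    return np.array(probs)
```

Since each derived value is a nonnegative linear combination of the EVT approximations (plus the exact term \(\iota _{\text {init}}(B)\)), a relative error bound on the EVTs carries over unchanged.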

Computing the stationary distribution within a BSCC. Next, we leverage EVTs to compute the stationary distribution of an irreducible DTMC \(\mathcal B = (S, \textbf{P}, \iota _{\text {init}})\). This method can be applied to derive the stationary distribution \(\theta ^{\mathcal {D}{\vert }_{B}}\) of \(\mathcal {D}{\vert }_{B}\) since the latter is an irreducible DTMC for each BSCC B. Let \(v \in S\) be an arbitrary state. We construct the DTMC \({\mathcal {B}{\Rsh }^{\hat{v}}{}}\) in which all incoming transitions of state v are redirected to a fresh absorbing state \(\hat{v}\). Thus, its only BSCC is \(\{\hat{v}\}\); all other states are transient. Formally, \({\mathcal {B}{\Rsh }^{\hat{v}}{}} = (S \uplus \{\hat{v}\}, \hat{\textbf{P}}, \hat{\iota }_{\text {init}})\), where \(\hat{\textbf{P}}(\hat{v}, \hat{v})=1\), \(\hat{\textbf{P}}(s,\hat{v}) = \textbf{P}(s,v)\) and \(\hat{\textbf{P}}(s,v) = 0\) for all \(s \in S\), \(\hat{\textbf{P}}(s,t) = \textbf{P}(s,t)\) for all \(s \in S, t \in S \setminus \{v\}\), and \(\hat{\iota }_{\text {init}}(v)= 1\).

Theorem 7

The stationary distribution of an irreducible DTMC \(\mathcal {B}\) is given by \(\theta ^{\mathcal {B}} = \Bigl( \mathbb {E}^{{\mathcal {B}{\Rsh }^{\hat{v}}{}}}_{}[\textsf{vt}_s] \,/\, \sum _{t\in S} \mathbb {E}^{{\mathcal {B}{\Rsh }^{\hat{v}}{}}}_{}[\textsf{vt}_t] \Bigr)_{s \in S}\). Further, if \(\textsf{diff}^{rel}\left( \textbf{x}, (\mathbb {E}^{{\mathcal {B}{\Rsh }^{\hat{v}}{}}}_{}[\textsf{vt}_s])_{s \in S} \right) \le \epsilon \) for \(\textbf{x}\in \mathbb {R}^{|S |}\) and \(\epsilon \in (0,1)\), then \( \textsf{diff}^{rel}\left( \tfrac{\textbf{x}}{\sum _{s\in S} \textbf{x}(s)}, \theta ^{\mathcal {B}}\right) \le \frac{2\epsilon }{1-\epsilon } \).

The first part of Theorem 7 can also be established by considering the renewal processes embedded in \(\mathcal {B}\) (see, e.g., [58, Theorem 2.2.3]).
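The construction of \({\mathcal {B}{\Rsh }^{\hat{v}}{}}\) can be prototyped in a few lines. The sketch below (hypothetical, dense, with a direct solve in place of an iterative method) never materializes the fresh state \(\hat{v}\): zeroing column v of \(\textbf{P}\) has the same effect on the EVTs of the original states, since the redirected transitions only feed the absorbing copy.

```python
import numpy as np

def stationary_via_evts(P, v=0):
    """Stationary distribution of an irreducible DTMC via redirected-chain
    EVTs (in the spirit of Theorem 7). Illustrative sketch only."""
    n = P.shape[0]
    Phat = P.copy()
    # Redirect all incoming transitions of v to the (implicit) fresh
    # absorbing state: column v vanishes on the original state space.
    Phat[:, v] = 0.0
    iota = np.zeros(n)
    iota[v] = 1.0                      # the redirected chain starts in v
    # All original states are now transient; EVTs solve (I - Phat^T) x = iota.
    x = np.linalg.solve(np.eye(n) - Phat.T, iota)
    return x / x.sum()                 # normalize EVTs to a distribution
```

For instance, for the two-state chain with \(\textbf{P} = \begin{pmatrix} 0.5 & 0.5 \\ 0.25 & 0.75 \end{pmatrix}\) the redirected EVTs are \((1, 2)\), yielding the stationary distribution \((\tfrac13, \tfrac23)\).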

Combining both steps. Theorems 6 and 7 and the interval iteration method yield approximations \(p_B\) and \(d_s\) for \(\Pr ^{\mathcal {D}}(\lozenge B)\) and \(\theta ^{\mathcal {D}{\vert }_{B}}(s)\), respectively, where B is a BSCC, \(s \in B\), and \(\epsilon _1, \epsilon _2 \in (0,1)\) such that \(\textsf{diff}^{rel}\left( p_B, \Pr ^{\mathcal {D}}(\lozenge B)\right) \le \epsilon _1\) and \(\textsf{diff}^{rel}\left( d_s, \theta ^{\mathcal {D}{\vert }_{B}}(s)\right) \le \epsilon _2\). The product \(p_B \cdot d_s\) approximates \(\theta ^{\mathcal {D}}(s)\) such that \(\textsf{diff}^{rel}\left( p_B \cdot d_s, \theta ^{\mathcal {D}}(s)\right) \le \epsilon _1 + \epsilon _2 + \epsilon _1 \epsilon _2\).
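The combined bound is simply the multiplicative accumulation of the two relative errors. Using the decomposition \(\theta ^{\mathcal {D}}(s) = \Pr ^{\mathcal {D}}(\lozenge B) \cdot \theta ^{\mathcal {D}{\vert }_{B}}(s)\) and the one-sided bounds implied by \(\textsf{diff}^{rel}\):

```latex
\begin{aligned}
p_B \cdot d_s
  &\le (1+\epsilon _1)\,\Pr ^{\mathcal {D}}(\lozenge B)\cdot (1+\epsilon _2)\,\theta ^{\mathcal {D}{\vert }_{B}}(s)
   = \bigl(1+\epsilon _1+\epsilon _2+\epsilon _1\epsilon _2\bigr)\,\theta ^{\mathcal {D}}(s), \\
p_B \cdot d_s
  &\ge (1-\epsilon _1)(1-\epsilon _2)\,\theta ^{\mathcal {D}}(s)
   \ge \bigl(1-(\epsilon _1+\epsilon _2+\epsilon _1\epsilon _2)\bigr)\,\theta ^{\mathcal {D}}(s).
\end{aligned}
```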

Fig. 4.
figure 4

The DTMC \(\mathcal {B}{\Rsh }^{\hat{s_6}}\).

Example 6

We compute the stationary distribution of the running example DTMC \(\mathcal {D}\) from Figure 2. Its only non-trivial BSCC is \(B = \{s_5, s_6\}\). The DTMC \(\mathcal {B}{\Rsh }^{\hat{s_6}}\) along with its (exact) EVTs is depicted in Figure 4. We conclude that \(\theta ^{\mathcal {D}{\vert }_{B}}\) is proportional to these EVTs, i.e., \(\theta ^{\mathcal {D}{\vert }_{B}}(s_5) = \tfrac{5}{8}\) and \(\theta ^{\mathcal {D}{\vert }_{B}}(s_6) = \tfrac{3}{8}\). Since the two BSCCs \(\{s_5, s_6\}\) and \(\{s_7\}\) are both reached with probability \(\tfrac{1}{2}\), it follows that the stationary probabilities of the three recurrent states \(s_5, s_6, s_7\) are \(\tfrac{5}{16}, \tfrac{3}{16}\), and \(\tfrac{1}{2}\), respectively.

6 Conditional Expected Rewards

Theorem 6 states that the EVTs of the transient states of a DTMC \(\mathcal {D}\) can be used to compute the probability to reach each individual BSCC of \(\mathcal {D}\). We now generalize this result and show that the EVTs can also be used to compute the total expected rewards conditioned on reaching each BSCC.

The total expected reward conditioned on reaching a set \(T \subseteq S\) of states with \(\Pr ^{\mathcal {D}}(\lozenge T) > 0\) is defined as:

$$\begin{aligned} \mathbb {E}^{\mathcal {D}}_{}[\textsf{rew} \mid \lozenge T] = \frac{\mathbb {E}^{\mathcal {D}}_{}[\textsf{rew} \cdot \mathbb {1}_{\lozenge T}]}{\Pr ^{\mathcal {D}}(\lozenge T)}, \end{aligned}$$

where \(\mathbb {1}_{\lozenge T}\) is the indicator of the event that T is reached.

Our next result asserts that, given the EVTs of \(\mathcal {D}\), all the values \(\mathbb {E}^{\mathcal {D}}_{}[\textsf{rew} \mid \lozenge B]\) for the BSCCs B of \(\mathcal {D}\) can be computed by solving a single linear equation system (the standard approach is to solve one linear equation system per BSCC [5, Ch. 10]). For simplicity, we state the result only for BSCCs \(\{r\}\) with a single absorbing state r, and for reward functions that assign zero reward to all recurrent (BSCC) states; this is w.l.o.g., as larger BSCCs can be collapsed, and positive reward in a (reachable) BSCC causes the conditional expected reward w.r.t. this BSCC to be \(\infty \), rendering numeric computations unnecessary.

Theorem 8

Let \(\textsf{rew}:S \rightarrow \mathbb {Q}_{\ge 0}\) with \(\textsf{rew}(S_{\text {re}}) = \{0\}\). Then the equation system

$$\begin{aligned} \forall s \in S_{\text {tr}}:\qquad \textbf{y}(s) = \textsf{rew}(s) \cdot \mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s] + \sum _{t \in S_{\text {tr}}} \textbf{P}(t, s) \cdot \textbf{y}(t) \end{aligned}$$

has a unique solution \((\textbf{y}(s))_{s \in S_{\text {tr}}}\) and for all absorbing \(r \in S\) with \(\Pr ^{\mathcal {D}}(\lozenge \{r\}) > 0\),

$$\begin{aligned} \mathbb {E}^{\mathcal {D}}_{}[\textsf{rew} \mid \lozenge \{r\}] = \frac{1}{\Pr ^{\mathcal {D}}(\lozenge \{r\})} \sum _{t \in S_{\text {tr}}} \textbf{P}(t, r) \cdot \textbf{y}(t). \end{aligned}$$

Theorem 8 assumes rational rewards as required in our proof in [47].
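Setting up the equation system of Theorem 8 is mechanical: it is a "forward" system that propagates EVT-weighted reward mass along the transposed transition matrix, so one solve over the transient states suffices for all BSCCs. A hypothetical dense sketch (names and signature are ours; deriving the conditional values from \(\textbf{y}\) then proceeds per absorbing state):

```python
import numpy as np

def conditional_reward_system(P, rew, evt, transient):
    """Solve y(s) = rew(s)*evt(s) + sum_t P(t,s)*y(t) over transient states,
    i.e., (I - P_tr^T) y = rew .* evt. Illustrative sketch of Theorem 8's
    single linear equation system."""
    tr = np.nonzero(transient)[0]
    # Transposed restriction of P to transient states: forward propagation.
    A = np.eye(len(tr)) - P[np.ix_(tr, tr)].T
    return np.linalg.solve(A, rew[tr] * evt[tr])
```

For the line chain \(s_0 \rightarrow s_1 \rightarrow r\) with rewards 2 and 3 on the transient states (all EVTs equal to 1), the solution is \(\textbf{y} = (2, 5)\): \(\textbf{y}(s_1)\) accumulates the total reward mass flowing towards the absorbing state.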

7 Experimental Evaluation

Implementation details. We integrated the presented algorithms for EVTs and stationary distributions in the model checker Storm [32]. The implementation is part of Storm 's main release available at https://stormchecker.org. It uses explicit data structures such as sparse matrices and vectors. When computing EVTs, we can use value iteration (VI) or interval iteration (II) as presented in Algorithms 1 and 2. Alternatively, the corresponding linear equation systems can be solved using LU factorization (a direct method implemented in the Eigen library [25]) or gmres (an iterative numerical method implemented in gmm++ [51]). Each EVT approach can be used in combination with the topological algorithm (topo) from Section 4.3. We use double-precision floating-point numbers. For II, the propagation of (relative) errors is accounted for such that the error of the end result does not exceed a user-defined threshold (here: \(\epsilon = 10^{-3}\)). Implementing II with safe rounding modes as in [28] is left for future work. The methods gmres and VI are configured with a fixed relative precision parameter (here: \(\epsilon = 10^{-6}\)). For LU, floating-point errors are the only source of inaccuracies. We also consider an exact configuration \(\texttt {LU}^X\) that uses rational arithmetic instead of floats.

Stationary distributions can be computed in Storm using the approaches Classic, EVTreach, and EVTfull. The Classic approach computes each BSCC reachability probability separately and the stationary distributions within the BSCCs are computed using the standard equation system [39, Thm. 4.18]. EVTreach and EVTfull implement our approaches from Section 5, where EVTreach only considers EVTs for BSCC reachability and EVTfull also derives the BSCC distributions from EVTs. As for EVTs, we use \(\texttt {LU}^{(X)}\) or gmres to solve linear equation systems. For the BSCC reachability probabilities, a topological algorithm can be enabled as well. Using EVTfull with II yields sound approximations.

Experimental setup. The experiments ran on an Intel® Xeon® Platinum 8160 processor limited to 4 cores and 12 GB of memory. The timeout was set to 30 minutes. Our implementation does not use multi-threading.

Fig. 5.
figure 5

Fast Dice Roller Results.

7.1 Verifying the Fast Dice Roller

Recall Lumbroso’s Fast Dice Roller [42] from Section 1. For a given parameter \(N\ge 1\), we verify that the resulting distribution is indeed uniform by computing the stationary distribution of the corresponding DTMC which, for this model family, coincides with the individual BSCC reachability probabilities as each BSCC consists of a single state. We conducted our experiments with an equivalent state-reduced variant of the Fast Dice Roller which we obtained automatically using the technique from [60], i.e., for every given N, our variant has fewer states than the original algorithm from [42]. The plot in Figure 5 shows for different values of N (x-axis) the runtime (y-axis) of the approaches Classic (using gmres or \(\texttt {LU}^X\)) and EVTfull (using II or \(\texttt {LU}^X\)), all using topological algorithms. Our novel EVTfull approach is significantly faster than the Classic method, enabling us to verify large instances with up to 4 800 255 states (\(N=32\,000\)) within the time limit. In particular, we can compute the values using exact arithmetic, as \(\texttt {LU}^X\) has runtimes similar to those of II in the EVTfull approach.
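For reference, the Fast Dice Roller itself fits in a few lines. The following is our transcription of the algorithm from [42] (with the output shifted to \(\{1,\ldots ,N\}\) as in Section 1, and a hypothetical `flip` parameter standing in for the unbiased coin); the DTMC of Figure 1 models exactly this loop for \(N=6\):

```python
import random

def fast_dice_roller(N, flip=lambda: random.getrandbits(1)):
    """Sample uniformly from {1, ..., N} using only fair coin flips
    (sketch of Lumbroso's Fast Dice Roller)."""
    v, c = 1, 0
    while True:
        # Double the range and absorb one random bit.
        v, c = 2 * v, 2 * c + flip()
        if v >= N:
            if c < N:
                return c + 1          # accept: c is uniform on {0,...,N-1}
            v, c = v - N, c - N       # reject: recycle leftover randomness
```

The rejection branch reuses the residual entropy, which is what makes the expected number of coin flips close to the information-theoretic optimum.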

7.2 Performance Comparison

To evaluate the various approaches, we computed EVTs and stationary distributions for all applicable finite models of type DTMC or CTMC of the Quantitative Verification Benchmark Set (QVBS) [31]: We excluded 2 model families for which none of the tested parameter valuations allowed any algorithm to complete using the available resources and for the EVT computation we excluded 8 models that do not contain any transient states. In addition to QVBS, we included Lumbroso’s Fast Dice Roller [42] and the handcrafted models (branch and loop) introduced in [45]. We considered multiple parameter valuations yielding a total of 62 instances (including 12 CTMCs) for computing EVTs and 79 instances (including 28 CTMCs) for computing the stationary distribution.

Our experiments for stationary distributions also include the implementations of the naive and guided sampling approaches (ap-naive and ap-sample) from [45] as well as the implementation of the classic approach in prism [41] as external baselines. These tools do not support Jani models and cannot compute EVTs.

We measured the runtime of the respective computation including the time for model construction. In cases where the exact results are known (using exact computations via \(\texttt {LU}^X\)), we consider results as incorrect if the relative difference to the exact value is greater than \(10^{-3}\). Results provided by the tool from [45] and by prism are not checked for correctness. We set the relative termination threshold of gmres and VI to \(\epsilon = 10^{-6}\) to compensate for inaccuracies of the unsound methods. When using II or prism, a relative precision of \(\epsilon = 10^{-3}\) was requested. For the implementation of [45] (which supports only absolute precision) we set the threshold to \(\epsilon = 10^{-3}\). See [47] for more experiments.

Fig. 6.
figure 6

Runtime comparison for EVTs (top) and stationary distributions (bottom).

Fig. 7.
figure 7

Scatter plots showing the EVT computation runtime for standard and topological II (left) as well as the runtime of the EVT (right) approaches for different state space sizes |S|.

Computing EVTs. The quantile plot at the top of Figure 6 indicates the time required for computing the EVTs of the 62 models for the different approaches. A point at position (x, y) indicates that the corresponding method solved the \(x^{th}\) fastest instance in y seconds, where only correctly solved instances are considered. The unsound methods VI and gmres produced 7 and 2 incorrect results, respectively. Furthermore, as errors accumulated, the topological variants of VI and gmres more frequently exceeded the threshold of \(10^{-3}\). The variants of II always produced correct results.

The plot indicates that (topological) LU and gmres outperform VI and II for easier instances. However, II catches up for the more intricate instances as it is more scalable than LU and always yields correct results. The exact method \(\texttt {LU}^X\) is significantly slower compared to the other methods. We also observe that the topological algorithms are superior to the non-topological variants. This is confirmed by the leftmost scatter plot in Figure 7 which compares the topological and non-topological variant of II. Here, each point (x, y) indicates an instance for which the methods on the x-axis and the y-axis required x and y seconds, respectively, to compute the EVTs. Instances that contain only singleton SCCs are depicted as triangles, whereas the remaining instances are represented by circles. No incorrect results (INC) were obtained. The scatter plot in the middle of Figure 7 indicates that the iterative methods are more scalable than exact \(\texttt {LU}^X\). We also see that models with millions of states can be solved in reasonable time.

Computing stationary distributions. The quantile plot at the bottom of Figure 6 summarizes the runtimes for the different stationary distribution approaches. We only consider the topological variants of the Storm approaches as they were consistently faster. No incorrect results were observed in this experiment.

The plot indicates that the guided sampling method (ap-sample) from [45] and prism perform significantly better than ap-naive. However, all algorithms provided by Storm — except for \(\texttt {Classic/LU}^X\) — outperform the other implementations. For LU and gmres, we observe that the EVTreach and EVTfull variants are significantly faster than the Classic approach. The EVTreach approach using gmres provides the fastest configuration, but is also the least reliable one in terms of accuracy. For the sound methods, we observe that the EVTfull approach with II is outperformed by LU combined with either EVTreach or EVTfull, where the latter shows the better performance.

8 Related Work

Computing stationary distributions. Other methods for the sound computation of the stationary distribution have been proposed in, e.g., [9, 11, 21]. In contrast to our work, they consider only subclasses of Markov chains: [11, 21] introduce an (iterative) algorithm applicable to Markov chains with positive rows while [9] presents a technique limited to time-reversible Markov chains. The recent approach from [45] can also handle general Markov chains. Our technique ensures soundness with respect to both absolute and relative differences, whereas the approaches of [45] only consider absolute precision.

Other applications of EVTs. The authors of [36] suggest using EVTs to compute the expected time to absorption, i.e., the expected number of steps until the chain reaches an absorbing state. Indeed, this quantity is given by the sum of the entries of the vector \((\mathbb {E}^{\mathcal {D}}_{}[\textsf{vt}_s])_{s \in S_{\text {tr}}}\) [36, Thm. 3.3.5]. For acyclic DTMCs, the EVT of a state coincides with the probability of reaching this state. This is relevant in the context of Bayesian networks [53, 54] since inference queries in the network can be reduced to reachability queries by translating Bayesian networks into tree-like DTMCs. Existing procedures for multi-objective model checking of MDPs employ linear programming methods relying on the EVTs of state-action pairs [13, 17, 18, 20]. EVTs are also employed in an algorithm proposed in [8] for LTL model checking of interval Markov chains. Moreover, EVTs have been leveraged for minimizing and learning DTMCs [1, 2]. Further recent applications of EVTs to MDPs include verifying cause-effect dependencies [3], as well as an abstraction-refinement procedure that measures the importance of states based on the EVTs under a fixed policy [33]. [22] employs EVTs in the context of policy iteration in reward-robust MDPs.

9 Conclusion

We elaborated on the computation of EVTs in DTMCs and CTMCs: The EVTs in DTMCs can be determined by solving a linear equation system, while computing EVTs in CTMCs reduces to the discrete-time setting. We first developed an iterative algorithm based on value iteration [49], which lacks precision guarantees. Building on interval iteration [6, 27] — an algorithm for the sound computation of reachability probabilities and expected rewards — we then developed an algorithm for approximating EVTs with accuracy guarantees. To enhance efficiency, we adapted a topological algorithm [14,15,16] to compute EVTs SCC-wise in topological order. We showed that EVTs enable the sound approximation of the stationary distribution and the efficient computation of conditional expected rewards. For future work, we want to extend our implementation in the model checker Storm [32] with symbolic computations. Another direction is to combine EVT-based computations with approximate verification approaches based on partially exploring relevant parts of the system [38, 45]. We conjecture that EVTs serve as a good heuristic to identify significant sub-regions within the state space.