Abstract
Essential tasks for the verification of probabilistic programs include bounding expected outcomes and proving termination in finite expected runtime. We contribute a simple yet effective inductive synthesis approach for proving such quantitative reachability properties by generating inductive invariants on source-code level. Our implementation shows promise: It finds invariants for (in)finite-state programs, can beat state-of-the-art probabilistic model checkers, and is competitive with modern tools dedicated to invariant synthesis and expected runtime reasoning.
This research was funded by the ERC AdG FRAPPANT under grant No. 787914.
1 Introduction
Reasoning about reachability probabilities is a foundational task in the analysis of randomized systems. Such systems are (possibly infinite-state) Markov chains, which are typically described as probabilistic programs – imperative programs that may sample from probability distributions. We contribute a method for proving bounds on quantitative properties of probabilistic programs, which finds inductive invariants on source-code level by inductive synthesis. We discuss each of these ingredients below, present our approach with a running example in Sect. 2, and defer a detailed discussion of related work to Sect. 8.
1) Quantitative Reachability Properties. We aim to verify properties such as “is the probability of reaching an error at most 1%?” More generally, our technique proves bounds on the expected value of a probabilistic program terminating in designated states (see Sect. 2.1). Various verification problems are ultimately solved by bounding quantitative reachability properties (cf. [7, 47]). Further examples of such problems include “does a program terminate with finite expected runtime?” and “is the expected sum of program variables x and y at least one?”
2) Inductive Invariants. An inductive invariant is a certificate that witnesses a certain quantitative reachability property. Quantitative (and qualitative) reachability are typically captured as least fixed points (cf. [7, 47, 52]). For upper bounds, this characterization makes it natural to search for a pre-fixed point – the inductive invariant – that, by standard fixed point theory [56], is greater than or equal to the least fixed point. Our invariants assign every state a quantity. If the initial state is assigned a quantity below the desired threshold, then the invariant certifies that the property in question holds. We detail quantitative inductive invariants in Sect. 2.2; we adapt our method to lower bound reasoning in Sect. 6.
3) Source-Code Level. We consider probabilistic programs over (potentially unbounded) integer variables that conceptually extend while-programs with coin flips, see e.g. Fig. 2.^{Footnote 1} We exploit the program structure to reason about infinite-state (and large finite-state) programs: We never construct a Markov chain but find symbolic inductive invariants (mapping from program states to nonnegative reals) on source-code level. We particularly discover inductive invariants that are piecewise linear, as they can often be verified efficiently.
4) Inductive Synthesis. Our approach to finding invariants, as sketched in Fig. 1, is inspired by inductive synthesis [4]: The inner loop (shaded box) is provided with a template T which may generate an infinite set \(\langle T \rangle \) of instances. We then synthesize a template instance I that is an inductive invariant witnessing quantitative reachability, or determine that no such instance exists. We search for such instances in a counterexample-guided inductive synthesis (CEGIS) loop: The synthesizer constructs a candidate. (A tailored variant of) an off-the-shelf verifier either (i) decides that the candidate is a suitable inductive invariant or (ii) reports a counterexample state \(s\) back to the synthesizer. Upon termination (guaranteed for finite-state programs), the inner loop has either found an inductive invariant or the solver reports that the template \(T\) does not admit an inductive invariant.
Contributions. We show that inductive synthesis for verifying quantitative reachability properties by finding inductive invariants on source-code level is feasible: Our approach is sound for arbitrary probabilistic programs, and complete for finite-state programs. We implemented our simple yet powerful technique. The results are promising: Our CEGIS loop is sufficiently fast to support large templates and finds inductive invariants for various probabilistic programs and properties. It can prove, amongst others, upper and lower bounds on reachability probabilities and universal positive almost-sure termination [42]. Our implementation is competitive with three state-of-the-art tools – Storm [39], Absynth [50], and Exist [9] – on subsets of their benchmarks fitting our framework.
Applicability and Limitations. We consider programs with possibly unbounded nonnegative integer-valued variables and arbitrary affine expressions in quantitative specifications. As for other synthesis-based approaches, there are unrealizable cases – loops for which no piecewise linear invariant exists. But, if there is an invariant, our CEGIS loop often finds it within a few iterations.
2 Overview
We illustrate our approach using the bounded retransmission protocol (BRP) – a standard probabilistic model checking benchmark [28, 38] – modeled by the probabilistic program in Fig. 2. The model attempts to transmit 8 million packets^{Footnote 2} over a lossy channel, where each packet is lost with probability \(0.1\%\); if a packet is lost, we retry sending it; if any packet is lost in 10 consecutive sending attempts (\(\textit{fail} = 10\)), the entire transmission fails; if all packets have been transmitted successfully (\(\textit{sent} = 8\,000\,000\)), the transmission succeeds.
2.1 Reachability Probabilities and Loops
We aim to reason about the transmission-failure probability of BRP, i.e. the probability that the loop terminates in a target state \(t\) with \(t(\textit{fail}) = 10\) when started in initial program state \(s_0\) with \(s_0(\textit{fail}) = s_0(\textit{sent}) = 0\). One approach to determine this probability is to (i) construct an explicit-state Markov chain (MC) per Fig. 2, (ii) derive its Bellman operator \(\varPhi \) [52], (iii) compute its least fixed point \(\textsf {{lfp}}~\varPhi \) (a vector containing for each state the probability to reach t), e.g. using value iteration (cf. [7, Thm 10.15]), and finally (iv) evaluate \(\textsf {{lfp}}~\varPhi \) at \(s_0\).
The explicit-state MC of BRP has ca. 80 million states. We avoid building such large state spaces by computing a symbolic representation of \(\varPhi \) from the program. More formally, let \(S\) be the set of all states, \(\texttt{loop}\) the entire loop (ll. 2–3 in Fig. 2), \(\texttt{body}\) the \(\texttt{loop}\)’s body (l. 3), and \(\llbracket {\texttt{body}} \rrbracket (s)(s')\) the probability of reaching state \(s'\) by executing \(\texttt{body}\) once on state \(s\). Then the least fixed point of the \(\texttt{loop}\)’s Bellman operator \(\varPhi :\bigl (S\rightarrow \mathbb {R}_{\ge 0}^\infty \bigr ) \rightarrow \bigl (S\rightarrow \mathbb {R}_{\ge 0}^\infty \bigr )\), defined by

\( \varPhi (I)(s) ~{}={}~ \left[ {s(\textit{fail}) = 10} \right] ~{}+{}~ \left[ {s(\textit{sent})< 8\,000\,000 \wedge s(\textit{fail}) < 10} \right] \cdot \sum \nolimits _{s' \in S} \llbracket {\texttt{body}} \rrbracket (s)(s') \cdot I(s')\,, \)
captures the transmission-failure probability for the entire execution of \(\texttt{loop}\) and for any initial state, that is, \((\textsf {{lfp}}~\varPhi )(s)\) is the probability of terminating in a target state when executing \(\texttt{loop}\) on \(s\) (even if \(\texttt{loop}\) would not terminate almost-surely). Intuitively, \(\varPhi (I)(s)\) maps to 1 if \(\texttt{loop}\) has terminated meeting the target condition (transmission failure); and to 0 if \(\texttt{loop}\) has terminated otherwise (transmission success). If \(\texttt{loop}\) is still running (i.e. it has neither failed nor succeeded yet), then \(\varPhi (I)(s)\) maps to the expected value of I after executing \(\texttt{body}\) on state \(s\).
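To make steps (i)–(iv) concrete, the following Python sketch runs value iteration towards \(\textsf {{lfp}}~\varPhi \) on a hypothetical down-scaled BRP: the parameters `N`, `F`, `P_LOSS` (3 packets, at most 2 consecutive losses, loss probability 0.5) and the dict-based state representation are our own illustrative choices, not the paper's implementation – the point of the paper is precisely to avoid this explicit construction for the real 80-million-state model.

```python
from itertools import product

# Hypothetical toy parameters (the real BRP model has N = 8_000_000, F = 10).
N, F, P_LOSS = 3, 2, 0.5

def phi(I):
    """One application of the Bellman operator to a state-indexed vector I."""
    J = {}
    for fail, sent in I:
        if not (sent < N and fail < F):      # loop terminated
            J[(fail, sent)] = 1.0 if fail == F else 0.0
        else:                                # expected value of I after body
            J[(fail, sent)] = ((1 - P_LOSS) * I[(0, sent + 1)]
                               + P_LOSS * I[(fail + 1, sent)])
    return J

# (iii) iterate from the zero vector towards the least fixed point
I = {(f, s): 0.0 for f, s in product(range(F + 1), range(N + 1))}
for _ in range(50):
    I = phi(I)

# (iv) transmission-failure probability from the initial state
print(I[(0, 0)])
```

Since every loop iteration increments either `sent` or `fail`, the toy chain is acyclic and value iteration reaches the exact least fixed point after a handful of sweeps.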
2.2 Quantitative Inductive Invariants
Reachability probabilities are generally not computable for infinite-state probabilistic programs [43]. Even for finite-state programs the state-space explosion may prevent us from computing reachability probabilities exactly. However, it often suffices to know that the reachability probability is bounded from above by some threshold \(\lambda \). For BRP, we hence aim to prove that \((\textsf {{lfp}}~\varPhi )(s_0) \le \lambda \).
We attack the above task by means of (quantitative) inductive invariants: a candidate for an inductive invariant is a mapping \(I:S\rightarrow \mathbb {R}_{\ge 0}^\infty \). Intuitively, such a candidate I is inductive if the following holds: when assuming that \(I(s)\) is (an overapproximation of) the probability to reach a target state upon termination of \(\texttt{loop}\) on s, then the probability to reach a target state after performing one more guarded loop iteration, i.e. executing \({\texttt {if}}\,\left( \, {\textit{sent} < {\ldots }} \,\right) \,\left\{ \, {{\texttt{body}}{\,;}~ {\texttt{loop}}} \,\right\} \) on \(s\), must be at most \(I(s)\). Formally, I is an inductive invariant^{Footnote 3} if
\( \varPhi (I) ~{}\preceq {}~ I, \quad \text {which implies}\quad \textsf {{lfp}}~\varPhi ~{}\preceq {}~ I \)

by Park induction [51]. Hence, \(I(s)\) bounds for each initial state \(s\) the exact reachability probability from above. If we are able to find an inductive I that is below \(\lambda \) for the initial state \(s_0\) with \(\textit{fail} = \textit{sent} = 0\), i.e. \(I(s_0) \le \lambda \), then we have indeed proven the upper bound \(\lambda \) on the transmission-failure probability of our BRP model. In a nutshell, our goal can be phrased as follows:
2.3 Our CEGIS Framework for Synthesizing Inductive Invariants
While finding a safe inductive invariant I is challenging, checking whether a given candidate I is indeed inductive is easier: it is decidable for certain infinite-state programs (cf. [14, Sect. 7.2]), it may not require an explicit exploration of the whole state space, and it can be done efficiently for piecewise linear I. Hence, techniques that generate decent candidate expressions fast and then check their inductivity could enable the automatic verification of probabilistic programs with gigantic and even infinite state spaces.
In this paper, we test this hypothesis by developing the CEGIS framework depicted in Fig. 1 for incrementally synthesizing inductive invariants. A template generator generates parametrized templates for inductive invariants. The inner loop (shaded box in Fig. 1) then tries to solve for appropriate template-parameter instantiations. If it succeeds, an inductive invariant has been synthesized. Otherwise, the template provably cannot be instantiated into an inductive invariant. The inner loop then reports that back to the template generator (possibly with some hint on why it failed, see [12, Appx. D]) and asks for a refined template.
For our running example, we start with the template

\( T ~{}={}~ \left[ {\textit{fail} = 10} \right] ~{}+{}~ \left[ {\textit{fail}< 10 \wedge \textit{sent} < 8\,000\,000} \right] \cdot \bigl (\, \alpha \cdot \textit{fail} ~{}+{}~ \beta \cdot \textit{sent} ~{}+{}~ \gamma \,\bigr )\,, \)
where we use Iverson brackets for indicators, i.e. \(\left[ {\varphi } \right] (s) = 1\) if \(s \models \varphi \) and 0 otherwise. T contains two kinds of variables: integer program variables \(\textit{fail}, \textit{sent} \) and \(\mathbb {Q} \)-valued parameters \(\alpha , \beta , \gamma \). While the template is nonlinear, substituting \(\alpha , \beta , \gamma \) with concrete values yields piecewise linear candidate invariants I. We ensure that those I are piecewise linear to render the repeated inductivity checks efficient. We construct only so-called natural templates T with \(\varPhi \) in mind, e.g. we want to construct only I such that \(I(s) = 1\) when \(s(\textit{fail}) = 10\).
Our inner CEGIS loop checks whether there exists an assignment from these template variables to concrete values such that the resulting piecewise linear expression is an inductive invariant. Concretely, we try to determine whether there exist values for \(\alpha , \beta , \gamma \) such that \(T(\alpha , \beta ,\gamma )\) is inductive. For that, we first guess values for \(\alpha , \beta , \gamma \), say all 0’s, and ask a verifier whether the instantiated (and now piecewise linear) template \(I = T(0,0,0)\) is indeed inductive. In our example, the verifier determines that I is not inductive: a counterexample is \(s(\textit{fail})=9\), \(s(\textit{sent})=7\,999\,999\). Intuitively, the probability to reach the target after one more loop iteration exceeds the value in I for this state, that is, \(\varPhi (I)(s) = 0.001 > 0 = I(s)\). From this counterexample, our synthesizer learns the lemma

\( 0.001 ~{}\le {}~ \alpha \cdot 9 ~{}+{}~ \beta \cdot 7\,999\,999 ~{}+{}~ \gamma \,. \)
Observe that this learned lemma is linear in \(\alpha , \beta , \gamma \). The synthesizer will now keep “guessing” assignments to the parameters which are consistent with the learned lemmas until either no such parameter assignment exists anymore, or until it produces an inductive invariant \(I = T(\ldots )\). In our running example, assuming \(\lambda = 0.9\), after 6 lemmas, our synthesizer finds the inductive invariant I
where indeed \(I(s_0) \le \lambda \) holds. For a tighter threshold \(\lambda \), such simple templates do not suffice. For example, it is impossible to instantiate this template to an inductive invariant for \(\lambda = 0.8\), even though 0.8 is an upper bound on the actual reachability probability. We therefore support more general templates of the form
where the \(B_i\) are (restricted) predicates over program and template variables which partition the state space. In particular, we allow for a template such as
However, such templates are challenging for the CEGIS loop. Thus, we additionally consider templates where the \(B_i\)’s range only over program variables, e.g.
Our partition refinement algorithms automatically produce these templates, without the need for user interaction.
Finally, we highlight that we may use our approach for more general questions. For BRP, suppose we want to verify an upper bound \(\lambda = 0.05\) on the probability of failing to transmit all packets for an infinite set of models (also called a family) with varying upper bounds on packets \(1 \le P \le 8\,000\,000\) and retransmissions \(R \ge 5\). This infinite set of models is described by the loop shown in Fig. 3a. Our approach fully automatically synthesizes the following inductive invariant \(I \):
The first summand of \(I \) is plotted in Fig. 3b. Since \(I \) overapproximates the probability of failing to transmit all packets for every state, \(I \) may be used to infer additional information about the reachability probabilities.
3 Formal Problem Statement
Before we state the precise invariant synthesis problem that we aim to solve, we summarize the essential concepts underlying our formalization.
Probabilistic Loops. We consider single probabilistic loops \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \) whose loop guard \(\varphi \) and (loop-free) body \(C\) adhere to the grammar
where \(z \in \mathbb {Z} \) is a constant and x is from an arbitrary finite set \(\textsf{Vars} \) of \(\mathbb {N} \)-valued program variables. Program states in \(S\) map variables to natural numbers.^{Footnote 4} All statements are standard (cf. [47]). \(\left\{ \, {C_1} \,\right\} \mathrel {\left[ \,p\,\right] }\left\{ \, {C_2} \,\right\} \) is a probabilistic choice which executes \(C_1\) with probability \(p \in [0,1] \cap \mathbb {Q} \) and \(C_2\) with probability \(1-p\). Fig. 2 (ll. 2–3) is an example of a probabilistic loop.
Expectations. In Sect. 2, we considered whether final states meet some target condition by assigning 0 or 1 to each final state. This assignment can be generalized to arbitrary quantities in \(\mathbb {R}_{\ge 0}^\infty \). We call such assignments f expectations [47] (think: random variable) and collect them in the set \(\mathbb {E}\), i.e.

\( \mathbb {E}~{}={}~ \bigl \{\, f ~\bigm |~ f :S\rightarrow \mathbb {R}_{\ge 0}^\infty \,\bigr \}\,. \)
\(\preceq \) denotes the pointwise order on \(\mathbb {E}\), i.e. \(f \preceq g\) iff \(f(s) \le g(s)\) for all \(s \in S\); it is a partial order – necessary to sensibly speak about least fixed points.
Characteristic Functions. The expected behavior of a probabilistic loop for an expectation f is captured by an expectation transformer (namely the \(\varPhi :\mathbb {E}\rightarrow \mathbb {E}\) of Sect. 2), called the loop’s characteristic function. To focus on invariant synthesis, we abstract from the details^{Footnote 5} of constructing characteristic functions from probabilistic loops; our framework only requires the following key property:
Proposition 1 (Characteristic Functions)
For every loop \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \) and expectation f, there exists a monotone function \(\varPhi _{f}:\mathbb {E}\rightarrow \mathbb {E}\) – the loop’s characteristic function with respect to f – whose least fixed point, denoted \(\textsf {{lfp}}~\varPhi _{f}\), satisfies: \((\textsf {{lfp}}~\varPhi _{f})(s)\) is the expected value of f in the final states reached upon termination of the loop on \(s\).
Example 1
In our running example from Sect. 2.1, we chose as f the expression \(\left[ {\textit{fail} = 10} \right] \), which evaluates to 1 in every state s where \(\textit{fail} = 10\) and to 0 otherwise. The characteristic function \(\varPhi _{f}(I)\) of the loop in Fig. 2 is

\( \varPhi _{f}(I) ~{}={}~ \left[ {\lnot \varphi } \right] \cdot \left[ {\textit{fail} = 10} \right] ~{}+{}~ \left[ {\varphi } \right] \cdot \bigl (\, 0.999 \cdot I\left[ \textit{fail}/0 \right] \left[ \textit{sent}/\textit{sent} +1 \right] ~{}+{}~ 0.001 \cdot I\left[ \textit{fail}/\textit{fail} +1 \right] \,\bigr )\,, \)
where \(\varphi = \textit{sent}< 8\,000\,000 \wedge \textit{fail} < 10\) is the loop guard and \(I\left[ x/e \right] \) denotes the (syntactic) substitution of variable x by expression \(e\) in expectation I – the latter is used to model the effect of assignments as in standard Hoare logic. \(\lhd \)
Inductive Invariants. For a probabilistic loop \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \), and pre- and postexpectations \(g,f \in \mathbb {E}\), we aim to verify \(\textsf {{lfp}}~\varPhi _{f} \preceq g\), i.e. that the expected value of f after termination of the loop is bounded from above by g. We discuss how to adapt our approach to expected runtimes and lower bounds in Sect. 6. Intuitively, f assigns a quantity to all target states reached upon termination. g assigns to all initial states a desired bound on the expected value of f after termination of the loop. By choosing \(g(s) = \infty \) for certain s, we can make \(s\) “irrelevant”, so to speak. An \(I \in \mathbb {E}\) is an inductive invariant proving \(\textsf {{lfp}}~\varPhi _{f} \preceq g\) iff \(\varPhi _{f}(I) \preceq I\) and \(I \preceq g\). Continuing our example, Eq. (2) on p. 5 shows an inductive invariant proving that \( \textsf {{lfp}}~\varPhi _{f} \preceq g {:}{=}[\textit{fail} =0 \wedge \textit{sent} =0]\cdot 0.9 + [\lnot (\textit{fail} =0 \wedge \textit{sent} =0)]\cdot \infty \).
Our framework employs syntactic fragments of expectations on which the check \(\varPhi _{f}(I) \preceq I\) can be done symbolically by an SMT solver. As illustrated in Fig. 1, we use templates to further narrow down the invariant search space.
Templates. Let \(\textsf{TVars}= \{\alpha , \beta , \ldots \}\) be a countably infinite set of \(\mathbb {Q} \)-valued template variables. A template valuation is a function \(\mathfrak {I}:\textsf{TVars}\rightarrow \mathbb {Q} \) that assigns to each template variable a rational number. We will use the same expressions as in our programs except that we admit both rationals and template variables as coefficients. Formally, arithmetic and Boolean expressions \(E\) and \(B\) adhere to
where \(x \in \textsf{Vars} \) and \(r \in \mathbb {Q} \cup \textsf{TVars}\). The set \(\textsf{TExp}\) of templates then consists of all
for \(n \ge 1\), where the Boolean expressions \(B_i\) partition the state space, i.e. for all template valuations \(\mathfrak {I}\) and all states \(s\), there is exactly one \(B_i\) such that \(\mathfrak {I}, s\models B_i\). \(T\) is a fixed-partition template if additionally no \(B_i\) contains a template variable.
Notice that templates are generally not linear (over \(\textsf{Vars} \cup \textsf{TVars}\)). Sect. 2 gives several examples of templates, e.g. Eq. (1).
Template Instances. We denote by \(T\left[ \mathfrak {I}\right] \) the instance of template \(T\) under \(\mathfrak {I}\), i.e. the expression obtained from substituting every template variable \(\alpha \) in \(T\) by its valuation \(\mathfrak {I}(\alpha )\). For example, the expression in Eq. (2) on p. 5 is an instance of the template in Eq. (1) on p. 5. The set of all instances of template \(T\) is defined as \(\langle T \rangle ~{}={}~ \{\, T\left[ \mathfrak {I}\right] ~\mid ~ \mathfrak {I}:\textsf{TVars}\rightarrow \mathbb {Q} \,\}\). We chose the shape of templates on purpose: To evaluate an instance \(T\left[ \mathfrak {I}\right] \) of a template \(T\) in a state \(s\), it suffices to find the unique Boolean expression \(B_i\) with \(\mathfrak {I}, s\models B_i\) and then evaluate the single linear arithmetic expression \(E_i\left[ \mathfrak {I}\right] \) in \(s\). For fixed-partition templates, the selection of the right \(B_i\) does not even depend on the template valuation \(\mathfrak {I}\).
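The instantiation-and-evaluation scheme just described can be sketched as follows. The BRP-like partition, the dict-based representation of linear expressions, and the helper names `instantiate`/`evaluate` are our own illustrative choices, not the paper's data structures.

```python
from fractions import Fraction as Q

# Each template piece pairs a predicate over program states with a linear
# expression whose coefficients are rationals or template-variable names;
# the key 1 stands for the constant summand.
template = [
    # [fail = 10] * 1
    (lambda s: s["fail"] == 10, {1: Q(1)}),
    # [fail < 10 and sent >= MAX] * 0   (hypothetical BRP-like partition)
    (lambda s: s["fail"] < 10 and s["sent"] >= 8_000_000, {1: Q(0)}),
    # [fail < 10 and sent < MAX] * (alpha*fail + beta*sent + gamma)
    (lambda s: s["fail"] < 10 and s["sent"] < 8_000_000,
     {"fail": "alpha", "sent": "beta", 1: "gamma"}),
]

def instantiate(template, valuation):
    """Substitute every template variable by its rational value."""
    def coeff(c):
        return valuation[c] if isinstance(c, str) else c
    return [(b, {v: coeff(c) for v, c in expr.items()}) for b, expr in template]

def evaluate(instance, state):
    """Find the unique piece B_i with state |= B_i and evaluate its E_i."""
    matches = [expr for b, expr in instance if b(state)]
    assert len(matches) == 1, "the B_i must partition the state space"
    (expr,) = matches
    return sum(c * (state[v] if v != 1 else 1) for v, c in expr.items())

I = instantiate(template, {"alpha": Q(1, 10), "beta": Q(0), "gamma": Q(0)})
print(evaluate(I, {"fail": 3, "sent": 42}))   # alpha * 3 = 3/10
```

Note that for this fixed-partition template the predicate lookup is independent of the valuation, exactly as remarked above.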
Piecewise Linear Expectations. Some template instances \(T\left[ \mathfrak {I}\right] \) do not represent expectations, i.e. they are not of type \(S\rightarrow \mathbb {R}_{\ge 0}^\infty \), as they may evaluate to negative numbers. Template instances \(T\left[ \mathfrak {I}\right] \) that do represent expectations are piecewise linear; we collect such well-defined instances in the set \(\textsf{LinExp} \). Formally,
Definition 1
(\(\boldsymbol{\textsf{LinExp}}\)). The set \(\textsf{LinExp} \) of (piecewise) linear expectations is \( \textsf{LinExp} ~{}={}~\{ T\left[ \mathfrak {I}\right] \mid T\in \textsf{TExp}~~\text {and}~~ \mathfrak {I}:\textsf{TVars}\rightarrow \mathbb {Q} ~~\text {and}~~ \forall s\in S:T\left[ \mathfrak {I}\right] (s) \ge 0 \} \).
We identify well-defined instances of templates in \(\textsf{LinExp} \) with the expectation in \(\mathbb {E}\) that they represent, e.g. when writing the inductivity check \(\varPhi _{f}(I) \preceq I\).
Natural Templates. As suggested in Sect. 2.3, it makes sense to focus only on so-called natural templates. Those are templates that even have a chance of becoming inductive, as they take the loop guard \(\varphi \) and postexpectation f into account. Formally, a template \(T\) is natural (w.r.t. \(\varphi \) and f) if \(T\) is of the form

\( T ~{}={}~ \left[ {\lnot \varphi } \right] \cdot f ~{}+{}~ \sum \nolimits _{i=1}^{n} \left[ {\varphi \wedge B_i} \right] \cdot E_i\,. \)
We collect all natural templates in the set \(\textsf{TnExp}\).
Formal Problem Statement. Throughout this paper, we fix an ambient single loop \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \), a postexpectation \(f \in \textsf{LinExp} \), and a preexpectation \(g \in \textsf{LinExp} \)^{Footnote 6} such that \(\textsf {{lfp}}~\varPhi _{f} \preceq g\)^{Footnote 7}. The set \(\textsf{AdmInv}\) of admissible invariants (i.e. those expectations that are both inductive and safe) is then given by

\( \textsf{AdmInv}~{}={}~ \bigl \{\, I \in \textsf{LinExp} ~\bigm |~ \underbrace{\varPhi _{f}(I) \preceq I}_{\text {inductivity}} ~~\text {and}~~ \underbrace{I \preceq g}_{\text {safety}} \,\bigr \}\,, \)
where the underbraces summarize the tasks for a verifier to decide whether a template instance I is an admissible inductive invariant. We require \(\textsf {{lfp}}~\varPhi _{f} \preceq g\), so that \(\textsf{AdmInv}\) is not vacuously empty due to an unsafe bound g.
Notice that \(\textsf{AdmInv}\) might be empty, even for safe g’s, because generally one might need more complex invariants than piecewise linear ones [16]. However, there always exists an inductive invariant in \(\textsf{LinExp} \) if a loop can reach only finitely many states.^{Footnote 8} We call a loop \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \) finite-state, if only finitely many states satisfy the loop guard \(\varphi \), i.e. if \(S_\varphi = \{\, s\in S\mid s\models \varphi \,\}\) is finite.
Syntactic Characteristic Functions. We work with linear expectations \(I,f \in \textsf{LinExp} \), so that we can check inductivity (\(\varPhi _{f}(I) \preceq I\)) symbolically (via SMT) without state space construction. In particular, we can construct a syntactic counterpart \(\varPsi _{f}\) to \(\varPhi _{f}\) that operates on templates. Intuitively, whether we evaluate \(\varPsi _{f}\) on a (syntactic) template \(T\) and then instantiate the result with a valuation \(\mathfrak {I}\), or we evaluate \(\varPhi _{f}\) on the (semantic) expectation \(T\left[ \mathfrak {I}\right] \) emerging from instantiating \(T\) with \(\mathfrak {I}\) – the results will coincide if \(T\left[ \mathfrak {I}\right] \) is well-defined. Formally:
Proposition 2
Given \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \) and \(f \in \textsf{LinExp} \), one can effectively compute a mapping \(\varPsi _{f} :\textsf{TExp}\rightarrow \textsf{TExp}\), such that for all \(T\) and \(\mathfrak {I}\) with \(T\left[ \mathfrak {I}\right] \in \textsf{LinExp} \):

\( \varPsi _{f}(T)\left[ \mathfrak {I}\right] ~{}={}~ \varPhi _{f}\bigl (\, T\left[ \mathfrak {I}\right] \,\bigr )\,. \)
Moreover, \(\varPsi _{f}\) maps fixed-partition templates to fixed-partition templates.
In Ex. 1, we have already constructed such a \(\varPsi _{f}\) to represent \(\varPhi _{f}\). The general construction is inspired by [14], but treats template variables as constants.
4 One-Shot Solver
One could address the template instantiation problem from Sect. 3 in one shot: encode it as an SMT query, ask a solver for a model, and infer from the model an admissible invariant. While this approach is infeasible in practice (as it involves quantification over \(S_\varphi \)), it inspires the CEGIS loop in Fig. 1.
Regarding the encoding, given a template \(T\), we need a formula over \(\textsf{TVars}\) that is satisfiable if and only if there exists a template valuation \(\mathfrak {I}\) such that \(T\left[ \mathfrak {I}\right] \) is an admissible invariant, i.e. \(T\left[ \mathfrak {I}\right] \in \textsf{AdmInv}\). To get rid of program variables in templates, we denote by \(T(s)\) the expression over \(\textsf{TVars}\) in which all program variables \(x \in \textsf{Vars} \) have been substituted by \(s(x)\).
Intuitively, we then encode that, for every state \(s\), the expression \(T(s)\) satisfies the three conditions of admissible invariants, i.e. well-definedness, inductivity, and safety. In particular, we use Prop. 2 to compute a template \(\varPsi _{f}(T)\) that represents the application of the characteristic function \(\varPhi _{f}\) to a candidate invariant, i.e. \(\varPhi _{f}(T\left[ \mathfrak {I}\right] )\) – a necessity for encoding inductivity.
Formally, we denote by \(\textsf{Sat}(\phi )\) the set of all models of a firstorder formula \(\phi \) (with a fixed underlying structure), i.e. \(\textsf{Sat}(\phi ) = \{ \mathfrak {I}\mid \mathfrak {I}\models \phi \}\). Then:
Theorem 1
For every natural template \(T\in \textsf{TnExp}\) and \(f, g \in \textsf{LinExp} \), we have

\( \textsf{Sat}\Bigl ( \forall s:~~ \underbrace{T(s) \ge 0}_{\text {well-definedness}} ~\wedge ~ \underbrace{\varPsi _{f}(T)(s) \le T(s)}_{\text {inductivity}} ~\wedge ~ \underbrace{T(s) \le g(s)}_{\text {safety}} \Bigr ) ~{}={}~ \bigl \{\, \mathfrak {I}~\bigm |~ T\left[ \mathfrak {I}\right] \in \textsf{AdmInv}\,\bigr \}\,. \)
Notice that, for fixed-partition templates, the above encoding is particularly simple: \(T(s)\) and \(\varPsi _{f}(T)(s)\) are equivalent to single linear arithmetic expressions over \(\textsf{TVars}\); \(g(s)\) is either a single expression or \(\infty \) – in the latter case, we get an equisatisfiable formula by dropping the always-satisfied constraint \(T(s) \le g(s)\).
For general templates, one can exploit the partitioning to break it down into multiple inequalities, i.e. every inequality becomes a conjunction over implications of linear inequalities over the template variables \(\textsf{TVars}\).
Example 2
Reconsider template \(T\) in Eq. (3) on p. 6 and assume a state \(s\) with \(s(\textit{fail}) = 5\) and \(s(\textit{sent}) = 2\). Then, we encode the well-definedness, \(T(s) \ge 0\), as
where the trivially satisfiable conjunct \(5 = 10 \Rightarrow \textsf{true}\) encoding the last summand, i.e. \(\left[ {\textit{fail} = 10} \right] \), has been dropped. \(\lhd \)
The query in Thm. 1 involves (nonlinear) mixed real and integer arithmetic with quantifiers – a theory that is undecidable in general. However, for finite-state loops and natural templates, one can replace the universal quantifier \(\forall s\) by a finite conjunction \(\bigwedge _{s\in S_\varphi }\) to obtain a (decidable) QF_LRA formula.
Theorem 2
The problem of deciding whether \(\langle T \rangle \cap \textsf{AdmInv}\ne \emptyset \) is decidable for finite-state loops and \(T\in \textsf{TnExp}\). If \(T\) is fixed-partition, it is decidable via linear programming.
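To illustrate why the fixed-partition case reduces to linear programming, consider the following sketch on a hypothetical one-parameter template for a small gambler's-ruin-style loop; with a single parameter \(\alpha \), the LP of the finite-conjunction encoding degenerates to intersecting rational intervals. The toy loop, the template, and all names below are our own assumptions.

```python
from fractions import Fraction as Q

# Toy instance: loop  while (0 < x < 3) { { x := x + 1 } [1/2] { x := x - 1 } }
# with postexpectation f = [x = 3] and the hypothetical natural template
#   T = [x >= 3]*1 + [x <= 0]*0 + [0 < x < 3]*alpha*x.
GUARD_STATES = [1, 2]           # S_phi is finite, so "forall s" is a conjunction
LAMBDA = Q(2, 5)                # desired safety bound g at the initial state x = 1

def T(x, alpha):                # template instance as an expression in alpha
    if x >= 3: return Q(1)
    if x <= 0: return Q(0)
    return alpha * x

def psi(x, alpha):              # Psi_f(T)(x): expected value of T after one body
    return Q(1, 2) * T(x + 1, alpha) + Q(1, 2) * T(x - 1, alpha)

# Each guard state contributes linear constraints on alpha; we intersect them.
lo, hi = Q(0), None             # well-definedness gives alpha >= 0
for s in GUARD_STATES:
    # inductivity psi(s) <= T(s), written as a*alpha + b <= 0
    a = (psi(s, Q(1)) - T(s, Q(1))) - (psi(s, Q(0)) - T(s, Q(0)))
    b = psi(s, Q(0)) - T(s, Q(0))
    if a > 0:   hi = min(hi, -b / a) if hi is not None else -b / a
    elif a < 0: lo = max(lo, -b / a)
    else:       assert b <= 0, "template admits no inductive instance"
hi = min(hi, LAMBDA) if hi is not None else LAMBDA   # safety: T(1) <= lambda

assert lo <= hi, "template admits no admissible instance"
print(f"feasible alpha in [{lo}, {hi}]")
```

Any \(\alpha \) in the printed interval yields an admissible invariant; the exact reachability probability from x is x/3, so the lower endpoint is tight.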
5 Constructing an Efficient CEGIS Loop
We now present a CEGIS loop (see inner loop of Fig. 1) in which a synthesizer and a verifier attempt to incrementally solve our problem statement (cf. p. 9).
5.1 The Verifier
We assume a verifier for checking \(I \in \textsf{AdmInv}\). For CEGIS, it is important to get some feedback whenever \(I \not \in \textsf{AdmInv}\). To this end, we define:
Definition 2
For a state \(s\in S\), the set \(\textsf{AdmInv}(s)\) of \(s\)-admissible invariants is

\( \textsf{AdmInv}(s) ~{}={}~ \bigl \{\, I \in \textsf{LinExp} ~\bigm |~ \varPhi _{f}(I)(s) \le I(s) ~~\text {and}~~ I(s) \le g(s) \,\bigr \}\,. \)
For a subset \(S' \subseteq S\) of states, we define \(\textsf{AdmInv}(S') = \bigcap _{s \in S'} \textsf{AdmInv}(s)\).
Clearly, if \(I \not \in \textsf{AdmInv}\), then \(I \notin \textsf{AdmInv}(s)\) for some \(s \in S\), i.e. state s is a counterexample to well-definedness, inductivity, or safety of I. We denote the set of all such counterexamples (to the claim \(I \in \textsf{AdmInv}\)) by \(\textsf{CounterEx}_I \). We assume an effective (baseline) verifier for detecting counterexamples:
Definition 3
A verifier is any function \(\textsf{Verify}:\textsf{LinExp} \rightarrow \{\textsf{true}\} \cup S\) such that

1. \(\textsf{Verify}(I )=\textsf{true}\) if and only if \(I \in \textsf{AdmInv}\), and
2. \(\textsf{Verify}(I )=s\) implies \(s \in \textsf{CounterEx}_I \).
Proposition 3
([14]). There exist effective verifiers.
For example, one can implement an SMT-backed verifier using an encoding analogous to Thm. 1, where every model is a counterexample \(s \in \textsf{CounterEx}_I \).
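As a brute-force stand-in for such an SMT-backed verifier, the following sketch checks well-definedness, inductivity, and safety state by state on a toy finite-state loop and returns the first counterexample it finds. The loop, the bound, and the two candidate invariants are our own illustrative assumptions.

```python
from fractions import Fraction as Q

# Toy loop:  while (0 < x < 3) { { x := x + 1 } [1/2] { x := x - 1 } }
# with f = [x = 3] and safety bound g = 2/5 at the initial state x = 1.
def phi(I, x):                       # characteristic function Phi_f(I)(x)
    if not 0 < x < 3:
        return Q(1) if x == 3 else Q(0)
    return Q(1, 2) * I(x + 1) + Q(1, 2) * I(x - 1)

def verify(I, states=range(0, 4), x0=1, lam=Q(2, 5)):
    for x in states:
        if I(x) < 0:                 # counterexample to well-definedness
            return x
        if phi(I, x) > I(x):         # counterexample to inductivity
            return x
    if I(x0) > lam:                  # counterexample to safety
        return x0
    return True

# An all-zero candidate (outside the guard it must agree with f) vs. the
# exact reachability probability x/3, which is inductive and safe.
bad  = lambda x: Q(0) if 0 < x < 3 else phi(lambda _: Q(0), x)
good = lambda x: Q(x, 3) if 0 < x < 3 else (Q(1) if x >= 3 else Q(0))
print(verify(bad), verify(good))
```

On the zero candidate the verifier reports state 2 (one step before the target, where \(\varPhi _{f}(I)(2) = \tfrac{1}{2} > 0 = I(2)\)), mirroring the BRP counterexample of Sect. 2.3.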
5.2 The CounterexampleGuided Inductive Synthesizer
A synthesizer must generate from a given template \(T\) instances \(I \in \langle T \rangle \) which can be passed to a verifier for checking admissibility. To make an informed guess, our synthesizers can take a finite set of witnesses \(S' \subseteq S\) into account:
Definition 4
Let \(\textsf{FinStates}\) be the set of finite sets of states. A synthesizer for template \(T\in \textsf{TnExp}\) is any function \(\textsf{Synt}_T:\textsf{FinStates} \rightarrow \langle T \rangle \cup \{ \textsf{false}\} \) such that

1. if \(\textsf{Synt}_T(S') = I\), then \(I \in \langle T \rangle \cap \textsf{AdmInv}(S')\), and
2. \(\textsf{Synt}_T(S') = \textsf{false}\) if and only if \(\langle T \rangle \cap \textsf{AdmInv}(S') = \emptyset \).
To build a synthesizer \(\textsf{Synt}_T(S')\) for finite sets of states \(S' \subseteq S\), we proceed analogously to one-shot solving for finite-state loops (Thm. 2), i.e. we exploit the finite conjunction

\( \bigwedge \nolimits _{s\in S'} ~~ T(s) \ge 0 ~\wedge ~ \varPsi _{f}(T)(s) \le T(s) ~\wedge ~ T(s) \le g(s)\,. \)
That is, our synthesizer may return any model \(\mathfrak {I}\) of the above constraint system; it can be implemented as one SMT query. In particular, one can efficiently find such an \(\mathfrak {I}\) for fixedpartition templates via linear programming.
Theorem 3 (Synthesizer Completeness)
For finite-state loops and natural templates \(T\in \textsf{TnExp}\), we have \(\textsf{Synt}_T(S_\varphi ) \in \textsf{AdmInv}\) or \(\langle T \rangle \cap \textsf{AdmInv}= \emptyset \).
Using the synthesizer and verifier in concert is then intuitive, see Alg. 1: We incrementally ask our synthesizer to provide a candidate invariant I that is \(s\)-admissible for all states \(s \in S'\). Unless the synthesizer returns \(\textsf{false}\), we ask the verifier whether I is admissible. If yes, we return I; otherwise, we get a counterexample s and add it to \(S'\) before synthesizing the next candidate.
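This interplay can be sketched end-to-end on a toy loop as follows. Everything here is an illustrative assumption: the loop, the one-parameter template, and in particular the synthesizer, whose per-state constraints we worked out by hand (a real implementation derives them from \(\varPsi _{f}(T)(s) \le T(s)\) via SMT or LP).

```python
from fractions import Fraction as Q

# Toy loop:  while (0 < x < 3) { { x := x + 1 } [1/2] { x := x - 1 } },
# f = [x = 3], bound lambda = 2/5 at x = 1, and the hypothetical template
#   T = [x >= 3]*1 + [x <= 0]*0 + [0 < x < 3]*alpha*x.
LAM = Q(2, 5)

def T(alpha):   # instantiate the template
    return lambda x: Q(1) if x >= 3 else (Q(0) if x <= 0 else alpha * x)

def phi(I, x):  # Phi_f(I)(x)
    if not 0 < x < 3:
        return Q(1) if x == 3 else Q(0)
    return Q(1, 2) * I(x + 1) + Q(1, 2) * I(x - 1)

def verify(I):  # returns True or a counterexample state
    for x in range(0, 4):
        if I(x) < 0 or phi(I, x) > I(x):
            return x
    return True if I(1) <= LAM else 1

def synthesize(counterexamples):   # any alpha admissible on the seen states
    lo = Q(0)                      # well-definedness: alpha >= 0
    for x in counterexamples:
        if x == 2: lo = max(lo, Q(1, 3))   # 1/2 + alpha/2 <= 2*alpha
    return lo if lo <= LAM else False      # safety: alpha <= lambda

seen = []
while True:                        # the CEGIS loop of Alg. 1
    alpha = synthesize(seen)
    assert alpha is not False, "template admits no admissible invariant"
    result = verify(T(alpha))
    if result is True:
        break
    seen.append(result)

print(f"inductive invariant found with alpha = {alpha}")
```

The first guess \(\alpha = 0\) fails with counterexample 2; the learned constraint forces \(\alpha \ge \tfrac{1}{3}\), and the second candidate is already admissible, mirroring the few-iteration behavior claimed for the BRP example.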
Remark 1
Without further restrictions, the verifier of Def. 3 may go into a counterexample enumeration spiral. In [12, Appx. C], we therefore discuss additional constraints that make this verifier act more cooperatively. \(\lhd \)
6 Generalization to Termination and Lower Bounds
We extend our approach to (i) proving universal positive almost-sure termination (UPAST) – termination in finite expected runtime on all inputs, see [42, Sect. 6] – by synthesizing piecewise linear upper bounds on expected runtimes, and to (ii) verifying lower bounds on possibly unbounded expected values.
UPAST. We leverage Kaminski et al.’s weakest-precondition-style calculus for reasoning about expected runtimes [44, 45]:
Proposition 4
For every loop \({\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \), the monotone function

\( \varTheta (I) ~{}={}~ \left[ {\varphi } \right] ~{}+{}~ \varPhi _{0}(I) \)

obtained from \(\varPhi _{0}\) (cf. Prop. 1) satisfies: \((\textsf {{lfp}}~\varTheta )(s)\) is the expected number of loop iterations when executing the loop on state \(s\).
All properties of \(\varPhi _{0}\) relevant to our approach carry over to \(\varTheta \), thus enabling the synthesis of inductive invariants \(I \in \textsf{LinExp} \) satisfying \(0 \preceq I \) and \(\varTheta (I ) \preceq I \). Such \(I \) upper-bound the expected number of loop iterations [44] and, since expectations in \(\textsf{LinExp} \) never evaluate to infinity, \(I \) witnesses UPAST of the \({\texttt {while}}\)-loop.
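A minimal sketch of such a runtime-invariant check, assuming the standard expected-runtime operator that charges one unit per guarded loop iteration (cf. the ert-calculus of Kaminski et al.); the toy loop and the candidate invariant are our own.

```python
from fractions import Fraction as Q

# Toy loop:  while (x > 0) { { x := x - 1 } [1/2] { skip } }.
# Theta charges one unit per iteration plus the expected value of I
# after executing the body once.
def theta(I, x):
    if x <= 0:
        return Q(0)               # loop over: no further iterations
    return 1 + Q(1, 2) * I(x - 1) + Q(1, 2) * I(x)

I = lambda x: 2 * Q(x)            # candidate runtime invariant 2*x

# Theta(I) <= I holds (with equality) on every checked state, and I is
# finite everywhere, so I witnesses UPAST: at most 2*x expected iterations.
assert all(theta(I, x) <= I(x) for x in range(0, 200))
print("I(x) = 2x is an inductive upper bound on the expected runtime")
```

Intuitively, each decrement of x succeeds with probability 1/2, so two iterations are expected per decrement, which is exactly what the invariant 2x captures.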
Lower Bounds. Consider the problem of verifying a lower bound \(g \preceq \textsf {{lfp}}~\varPhi _{f}\) for some loop \(C' = {\texttt {while}}\left( \, {\varphi } \,\right) \left\{ \, {C} \,\right\} \). It is straightforward to modify our CEGIS approach for synthesizing subinvariants, i.e. \(I \in \textsf{LinExp} \) with \(I \preceq \varPhi _{f}(I )\). However, Hark et al. [36] showed that subinvariants do not necessarily lower-bound \(\textsf {{lfp}}~\varPhi _{f}\); they hence proposed a more involved yet sound induction rule for lower bounds:
Theorem 4
(Adapted from Hark et al. [36]). Let \(T\) be a natural template and \(I \in \langle T \rangle \). If \(0\preceq I \), \(I \preceq \varPhi _{f}(I )\), \(I \) is conditionally difference bounded (c.d.b.), and \(C'\) is UPAST, then \(I \preceq \textsf {{lfp}}~\varPhi _{f}\).
Akin to Prob. 2, given \(T\in \textsf{TnExp}\), we can compute a template \(T' \in \textsf{TnExp}\) such that, for every instantiation \(\mathfrak {I}\), the instance of \(T'\) under \(\mathfrak {I}\) captures the conditional difference boundedness condition for the instance of \(T\) under \(\mathfrak {I}\). This facilitates the extension of our verifier and synthesizer (see Sect. 5) to encoding and checking conditional difference boundedness. Hence, we can employ our CEGIS framework for verifying \(g \preceq \textsf {{lfp}}~\varPhi _{f}\) by (i) proving UPAST of \(C'\) as demonstrated above and (ii) synthesizing a c.d.b. subinvariant \(I \) with \(g \preceq I \).
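A tiny worked instance of this lower-bound recipe (our own toy loop again): for \({\texttt {while}}\left( \, {c=1} \,\right) \left\{ \, {c \mathrel {\mathtt {{:}{=}}}0~[1/2]~c \mathrel {\mathtt {{:}{=}}}1} \,\right\} \) with \(f = \left[ {c=0} \right] \) and target \(g = 1\), take the constant candidate \(I = 1\). Subinvariance holds on both states:

```latex
\varPhi_{f}(I)(c{=}0) \;=\; f(c{=}0) \;=\; 1,
\qquad
\varPhi_{f}(I)(c{=}1) \;=\; \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 1 \;=\; 1,
\quad\text{so}\quad I \;\preceq\; \varPhi_{f}(I).
```

Since \(I\) is constant, it is trivially c.d.b., and the loop is UPAST (the number of iterations is geometrically distributed with expectation 2). The rule then yields \(g = 1 \preceq I \preceq \textsf {{lfp}}~\varPhi _{f}\): the loop terminates in \(c = 0\) with probability 1.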
7 Empirical Evaluation
We have implemented a prototype of our techniques called cegispro2^{Footnote 9}: CEGIS for PRObabilistic PROgrams. The tool is written in Python using pySMT [34] with Z3 [49] as the backend for SMT solving. cegispro2 proves upper or lower bounds on expected outcomes of a probabilistic program by synthesizing quantitative inductive invariants. We investigate the applicability and scalability of our approach with a focus on the expressiveness of piecewise linear invariants. Moreover, we compare with three state-of-the-art tools – Storm [39], Absynth [50], and Exist [9] – on subsets of their benchmarks fitting into our framework.
Template Refinement. We start with a fixed-partition template \(T_1\) constructed automatically from the syntactic structure of the given loop (i.e. the loop guard and branches in the loop body, see e.g. Eq. (1)). If we learn that \(T_1\) admits no admissible invariant, we generate a refined template \(T_2\), and so on, until we find a template \(T_{i}\) with \(\langle T_i \rangle \cap \textsf{AdmInv}\ne \emptyset \) or realize that no further refinement is possible. We implemented three strategies for template refinement (including one producing non-fixed-partition templates); see [12, Appx. D] for details.
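One conceivable fixed-partition refinement step — purely illustrative on our part; the tool's actual strategies are described in [12, Appx. D] — splits the template piece containing a counterexample state so that the state ends up on a piece boundary:

```python
def refine(pieces, cex):
    """Split the half-open interval piece [lo, hi) containing `cex`.
    Each resulting piece receives fresh template coefficients in the
    next synthesis round; pieces of size one cannot be split further."""
    refined = []
    for lo, hi in pieces:
        if lo <= cex < hi and hi - lo > 1:
            split = cex if cex > lo else cex + 1
            refined += [(lo, split), (split, hi)]
        else:
            refined.append((lo, hi))
    return refined

print(refine([(0, 10)], 4))          # [(0, 4), (4, 10)]
print(refine([(0, 4), (4, 10)], 7))  # [(0, 4), (4, 7), (7, 10)]
```

Refinement terminates in the finite-state case: in the worst case every state gets its own piece, recovering the per-state completeness observed earlier.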
Finite-State Programs. Fig. 4a depicts experiments on verifying upper bounds on expected values of finite-state programs. For each benchmark, i.e. program and property with increasingly sharper bounds, we evaluate cegispro2 on all template-refinement strategies (cf. [12, Appx. D]). We compare with the explicit and symbolic-state engines of the probabilistic model checker Storm 1.6.3 [39] using exact arithmetic. Storm implements LP-based model checking (as in Sect. 4) but employs more efficient methods in its default configuration. Fig. 4a depicts the runtime of the best configuration. See detailed configurations in [12, Appx. E.1].
Results. (i) Our CEGIS approach synthesizes inductive invariants for a variety of programs. We mostly find syntactically small invariants with a small number of counterexamples compared to the state-space size (cf. [12, Tab. 2]). This indicates that piecewise linear inductive invariants can be sufficiently expressive for the verification of finite-state programs. The overall performance of cegispro2 depends highly on the sharpness of the given thresholds. (ii) Our approach can outperform state-of-the-art explicit and symbolic-state model checking techniques and can scale to huge state spaces. There are also simple programs where our method fails to find an inductive invariant (gridbig) or finds inductive invariants only for rather simple properties while requiring many counterexamples (gridsmall). Whether we need more sophisticated template refinements or whether these programs are not amenable to piecewise linear expectations is left for future work. (iii) There is no clear winner between the two fixed-partition template-refinement strategies (cf. [12, Tab. 2]). We further observe that the non-fixed-partition refinement is not competitive, as significantly more time is spent in the synthesizer to solve formulae with Boolean structure. We thus conclude that searching for good fixed-partition templates in a separate outer loop (cf. Fig. 1) pays off.
Proving UPAST. Fig. 4b depicts experiments on proving UPAST of (possibly infinite-state) programs taken from [50] (restricted to \(\mathbb {N} \)-valued, linear programs with flattened nested loops). We compare to the LP-based tool Absynth [50] for computing upper bounds on expected runtimes. These benchmarks do not require template refinements. More details are given in [12, Appx. E.2].
Results. cegispro2 can prove UPAST of various infinite-state programs from the literature using very few counterexamples. Absynth mostly outperforms cegispro2^{Footnote 10}, which is to be expected as Absynth is tailored to the computation of expected runtimes. Remarkably, the runtime bounds synthesized by cegispro2 are often as tight as those synthesized by Absynth (cf. [12, Tab. 3]).
Verifying Lower Bounds. Fig. 4c depicts experiments aiming to verify lower bounds on expected values of (possibly infinite-state) programs taken from [9]. We compare to Exist [9]^{Footnote 11}, which combines CEGIS with sampling and ML-based techniques. However, Exist synthesizes subinvariants only, which might be unsound for proving lower bounds (cf. Sect. 6). Thus, for a fair comparison, Fig. 4c depicts experiments where both Exist and cegispro2 synthesize subinvariants only, whereas in Fig. 4d, we compare cegispro2 finding subinvariants only against cegispro2 additionally proving UPAST and c.d.b., thus obtaining sound lower bounds as per Thm. 4. No benchmark requires template refinements.
Results. cegispro2 is capable of verifying quantitative lower bounds and outperforms Exist (on 30/32 benchmarks) for synthesizing subinvariants. Additionally proving UPAST and c.d.b. naturally requires more time. A manual inspection reveals that, for most TO/MO cases in Fig. 4d, there is no c.d.b. subinvariant. One soundness check times out, since we could not prove UPAST for that benchmark.
8 Related Work
We discuss related work in invariant synthesis, probabilistic model checking, and symbolic inference. ICE [33] is a template-based, counterexample-guided technique for learning invariants. More inductive synthesis approaches are surveyed in [4, 29].
Quantitative Invariant Synthesis. Apart from the discussed method [9], constraint-solving-based approaches [26, 30, 46] aim to synthesize quantitative invariants for proving lower bounds over \(\mathbb {R}\)-valued program variables – arguably a simplification as it allows solvers to use (decidable) real arithmetic. In particular, [26] also obtains linear constraints from counterexamples ensuring certain validity conditions on candidate invariants. Apart from various technical differences, we identify three conceptual differences: (i) we support piecewise expectations, which have been shown sufficiently expressive for verifying quantitative reachability properties; (ii) we focus on the integration of fast verifiers over efficiently decidable theories; and (iii) we do not need to assume termination or boundedness of expectations.
Various martingale-based approaches, such as [2, 19, 23, 24, 31, 32, 48], aim to synthesize quantitative invariants over \(\mathbb {R}\)-valued variables; see [55] for a recent survey. Most of these approaches yield invariants for proving almost-sure termination or bounding expected runtimes. \(\varepsilon \)-decreasing supermartingales [19, 20] and nonnegative repulsing supermartingales [55] can upper-bound arbitrary reachability probabilities. In contrast, we synthesize invariants for proving upper or lower bounds on more general quantities, i.e. expectations. [10] can prove bounds on expected values via symbolic reasoning and Doob’s decomposition, which, however, requires user-supplied invariants and hints. [1] employs a CEGIS loop to train a neural network dedicated to learning a ranking supermartingale witnessing UPAST of (possibly continuous) probabilistic programs. They also use counterexamples provided by SMT solvers to guide the learning process.
The recurrence-solving-based approach in [11] synthesizes nonlinear invariants encoding (higher-order) moments of program variables. However, the underlying algebraic techniques are confined to the subclass of Prob-solvable loops.
Probabilistic Model Checking. Symbolic probabilistic model checking focuses mostly on algebraic decision diagrams [3, 6], representing the transition relation symbolically and using equation solving or value iteration [8, 37, 53] on that representation. PrIC3 [15] finds quantitative invariants by iteratively over-approximating \(k\)-step reachability. Alternative CEGIS approaches synthesize Markov chains [18] and probabilistic programs [5] that satisfy reachability properties.
Symbolic Inference. Probabilistic inference – in the finite-horizon case – employs weighted model counting via either decision diagrams annotated with probabilities, as in Dice [40, 41], or approximate versions based on SAT/SMT solvers [17, 21, 22, 27, 54]. PSI [35] determines symbolic representations of exact distributions. Prodigy [25] decides whether a probabilistic loop agrees with an (invariant) specification.
Notes
 1.
Prism programs can be interpreted as an implicit while(not error-state) \(\{ \ldots \}\) program – see [40] for an explicit translation.
 2.
Large constants like the number of packets appear naturally in quantitative models of protocols and have a nontrivial impact on probabilities.
 3.
For an exposition of why it makes sense to speak of invariants even in a quantitative setting, [42, Sect. 5.1] relates quantitative invariants to invariants in Hoare logic.
 4.
Considering only unsigned integers does not decrease expressive power but simplifies the technical presentation (cf. [16, Sect. 11.2] for a detailed discussion). We statically ensure that for every assignment \(x \mathrel {\mathtt {{:}{=}}}e\), e always evaluates to some value in \(\mathbb {N} \).
 5.
We can (and our tool does) derive a symbolic representation of a loop’s characteristic function from the program structure using a weakest-precondition-style calculus (cf. [47]); see [12, Appx. A] for details. If f maps only to 0 or 1, \(\varPhi _{f}\) corresponds to the least fixed point characterization of reachability probabilities [7, Thm. 10.15].
 6.
To enable declaring certain states as irrelevant, we additionally allow \(E_i = \infty \) in the linear preexpectation \(g = \left[ {B_1} \right] \cdot E_1 + \ldots + \left[ {B_n} \right] \cdot E_n\).
 7.
We discuss in Sect. 6 how to reason about lower bounds \(g \preceq \textsf {{lfp}}~\varPhi _{f}\).
 8.
Bluntly: just choose as many pieces as there are states.
 9.
 10.
Absynth uses floatingpoint arithmetic whereas cegispro2 uses exact arithmetic.
 11.
Exist supports parametric probabilities, which are not supported by our tool. We have instantiated these parameters with varying probabilities to enable a comparison.
References
Abate, A., Giacobbe, M., Roy, D.: Learning probabilistic termination proofs. In: CAV (2). Lecture Notes in Computer Science, vol. 12760, pp. 3–26. Springer (2021)
Agrawal, S., Chatterjee, K., Novotný, P.: Lexicographic ranking supermartingales. PACMPL 2(POPL), 34:1–34:32 (2018)
de Alfaro, L., Kwiatkowska, M.Z., Norman, G., Parker, D., Segala, R.: Symbolic model checking of probabilistic processes using MTBDDs and the Kronecker representation. In: TACAS. Lecture Notes in Computer Science, vol. 1785, pp. 395–410. Springer (2000)
Alur, R., Bodík, R., Dallal, E., Fisman, D., Garg, P., Juniwal, G., Kress-Gazit, H., Madhusudan, P., Martin, M.M.K., Raghothaman, M., Saha, S., Seshia, S.A., Singh, R., Solar-Lezama, A., Torlak, E., Udupa, A.: Syntax-guided synthesis. In: Dependable Software Systems Engineering, vol. 40, pp. 1–25. IOS Press (2015)
Andriushchenko, R., Ceska, M., Junges, S., Katoen, J.: Inductive synthesis for probabilistic programs reaches new horizons. In: TACAS (1). Lecture Notes in Computer Science, vol. 12651, pp. 191–209. Springer (2021)
Baier, C., Clarke, E.M., Hartonas-Garmhausen, V., Kwiatkowska, M.Z., Ryan, M.: Symbolic model checking for probabilistic processes. In: ICALP. Lecture Notes in Computer Science, vol. 1256, pp. 430–440. Springer (1997)
Baier, C., Katoen, J.: Principles of Model Checking. MIT Press (2008)
Baier, C., Klein, J., Leuschner, L., Parker, D., Wunderlich, S.: Ensuring the reliability of your model checker: Interval iteration for Markov decision processes. In: CAV (1). Lecture Notes in Computer Science, vol. 10426, pp. 160–180. Springer (2017)
Bao, J., Trivedi, N., Pathak, D., Hsu, J., Roy, S.: Data-driven invariant learning for probabilistic programs. In: CAV (1). Lecture Notes in Computer Science, vol. 13371, pp. 33–54. Springer (2022)
Barthe, G., Espitau, T., Fioriti, L.M.F., Hsu, J.: Synthesizing probabilistic invariants via Doob’s decomposition. In: CAV (1). Lecture Notes in Computer Science, vol. 9779, pp. 43–61. Springer (2016)
Bartocci, E., Kovács, L., Stankovic, M.: Automatic generation of moment-based invariants for Prob-solvable loops. In: ATVA. Lecture Notes in Computer Science, vol. 11781, pp. 255–276. Springer (2019)
Batz, K., Chen, M., Junges, S., Kaminski, B.L., Katoen, J., Matheja, C.: Probabilistic program verification via inductive synthesis of inductive invariants. CoRR abs/2205.06152 (2022)
Batz, K., Chen, M., Junges, S., Kaminski, B.L., Katoen, J., Matheja, C.: cegispro2: Artifact for paper "probabilistic program verification via inductive synthesis of inductive invariants" (2023). https://doi.org/10.5281/zenodo.7507921
Batz, K., Chen, M., Kaminski, B.L., Katoen, J., Matheja, C., Schröer, P.: Latticed \(k\)induction with an application to probabilistic programs. In: CAV (2). Lecture Notes in Computer Science, vol. 12760, pp. 524–549. Springer (2021)
Batz, K., Junges, S., Kaminski, B.L., Katoen, J., Matheja, C., Schröer, P.: PrIC3: Property directed reachability for MDPs. In: CAV (2). Lecture Notes in Computer Science, vol. 12225, pp. 512–538. Springer (2020)
Batz, K., Kaminski, B.L., Katoen, J., Matheja, C.: Relatively complete verification of probabilistic programs: An expressive language for expectationbased reasoning. Proc. ACM Program. Lang. 5(POPL), 1–30 (2021)
Belle, V., Passerini, A., van den Broeck, G.: Probabilistic inference in hybrid domains by weighted model integration. In: IJCAI. pp. 2770–2776. AAAI Press (2015)
Ceska, M., Hensel, C., Junges, S., Katoen, J.: Counterexample-guided inductive synthesis for probabilistic systems. Formal Aspects Comput. 33(4–5), 637–667 (2021)
Chakarov, A., Sankaranarayanan, S.: Probabilistic program analysis with martingales. In: CAV. Lecture Notes in Computer Science, vol. 8044, pp. 511–526. Springer (2013)
Chakarov, A., Voronin, Y., Sankaranarayanan, S.: Deductive proofs of almost sure persistence and recurrence properties. In: TACAS. Lecture Notes in Computer Science, vol. 9636, pp. 260–279. Springer (2016)
Chakraborty, S., Fried, D., Meel, K.S., Vardi, M.Y.: From weighted to unweighted model counting. In: IJCAI. pp. 689–695. AAAI Press (2015)
Chakraborty, S., Meel, K.S., Mistry, R., Vardi, M.Y.: Approximate probabilistic inference via word-level counting. In: AAAI. pp. 3218–3224. AAAI Press (2016)
Chatterjee, K., Fu, H., Goharshady, A.K.: Termination analysis of probabilistic programs through Positivstellensatz’s. In: CAV (1). Lecture Notes in Computer Science, vol. 9779, pp. 3–22. Springer (2016)
Chatterjee, K., Novotný, P., Zikelic, D.: Stochastic invariants for probabilistic termination. In: POPL. pp. 145–160. ACM (2017)
Chen, M., Katoen, J., Klinkenberg, L., Winkler, T.: Does a program yield the right distribution? Verifying probabilistic programs via generating functions. In: CAV (1). Lecture Notes in Computer Science, vol. 13371, pp. 79–101. Springer (2022)
Chen, Y., Hong, C., Wang, B., Zhang, L.: Counterexample-guided polynomial loop invariant generation by Lagrange interpolation. In: CAV (1). Lecture Notes in Computer Science, vol. 9206, pp. 658–674. Springer (2015)
Chistikov, D., Dimitrova, R., Majumdar, R.: Approximate counting in SMT and value estimation for probabilistic programs. Acta Informatica 54(8), 729–764 (2017)
D’Argenio, P.R., Jeannet, B., Jensen, H.E., Larsen, K.G.: Reachability analysis of probabilistic systems by successive refinements. In: PAPM-PROBMIV. Lecture Notes in Computer Science, vol. 2165, pp. 39–56. Springer (2001)
Fedyukovich, G., Bodík, R.: Accelerating syntax-guided invariant synthesis. In: TACAS (1). Lecture Notes in Computer Science, vol. 10805, pp. 251–269. Springer (2018)
Feng, Y., Zhang, L., Jansen, D.N., Zhan, N., Xia, B.: Finding polynomial loop invariants for probabilistic programs. In: ATVA. Lecture Notes in Computer Science, vol. 10482, pp. 400–416. Springer (2017)
Fioriti, L.M.F., Hermanns, H.: Probabilistic termination: Soundness, completeness, and compositionality. In: POPL. pp. 489–501. ACM (2015)
Fu, H., Chatterjee, K.: Termination of nondeterministic probabilistic programs. In: VMCAI. Lecture Notes in Computer Science, vol. 11388, pp. 468–490. Springer (2019)
Garg, P., Löding, C., Madhusudan, P., Neider, D.: ICE: A robust framework for learning invariants. In: CAV. Lecture Notes in Computer Science, vol. 8559, pp. 69–87. Springer (2014)
Gario, M., Micheli, A.: PySMT: A solver-agnostic library for fast prototyping of SMT-based algorithms. In: SMT Workshop (2015)
Gehr, T., Misailovic, S., Vechev, M.T.: PSI: Exact symbolic inference for probabilistic programs. In: CAV (1). Lecture Notes in Computer Science, vol. 9779, pp. 62–83. Springer (2016)
Hark, M., Kaminski, B.L., Giesl, J., Katoen, J.: Aiming low is harder: Induction for lower bounds in probabilistic program verification. Proc. ACM Program. Lang. 4(POPL), 37:1–37:28 (2020)
Hartmanns, A., Kaminski, B.L.: Optimistic value iteration. In: CAV (2). Lecture Notes in Computer Science, vol. 12225, pp. 488–511. Springer (2020)
Helmink, L., Sellink, M.P.A., Vaandrager, F.W.: Proof-checking a data link protocol. In: TYPES. Lecture Notes in Computer Science, vol. 806, pp. 127–165. Springer (1993)
Hensel, C., Junges, S., Katoen, J., Quatmann, T., Volk, M.: The probabilistic model checker Storm. Int. J. Softw. Tools Technol. Transf. 24(4), 589–610 (2022)
Holtzen, S., Junges, S., Vazquez-Chanlatte, M., Millstein, T.D., Seshia, S.A., van den Broeck, G.: Model checking finite-horizon Markov chains with probabilistic inference. In: CAV (2). Lecture Notes in Computer Science, vol. 12760, pp. 577–601. Springer (2021)
Holtzen, S., van den Broeck, G., Millstein, T.D.: Scaling exact inference for discrete probabilistic programs. Proc. ACM Program. Lang. 4(OOPSLA), 140:1–140:31 (2020)
Kaminski, B.L.: Advanced Weakest Precondition Calculi for Probabilistic Programs. Ph.D. thesis, RWTH Aachen University, Germany (2019)
Kaminski, B.L., Katoen, J., Matheja, C.: On the hardness of analyzing probabilistic programs. Acta Inform. 56(3), 255–285 (2019)
Kaminski, B.L., Katoen, J., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected runtimes of probabilistic programs. In: ESOP. Lecture Notes in Computer Science, vol. 9632, pp. 364–389. Springer (2016)
Kaminski, B.L., Katoen, J., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected runtimes of randomized algorithms. J. ACM 65(5), 30:1–30:68 (2018)
Katoen, J., McIver, A., Meinicke, L., Morgan, C.: Linearinvariant generation for probabilistic programs: Automated support for proofbased methods. In: SAS. Lecture Notes in Computer Science, vol. 6337, pp. 390–406. Springer (2010)
McIver, A., Morgan, C.: Abstraction, Refinement and Proof for Probabilistic Systems. Monographs in Computer Science, Springer (2005)
Moosbrugger, M., Bartocci, E., Katoen, J., Kovács, L.: Automated termination analysis of polynomial probabilistic programs. In: ESOP. Lecture Notes in Computer Science, vol. 12648, pp. 491–518. Springer (2021)
de Moura, L.M., Bjørner, N.S.: Z3: An efficient SMT solver. In: TACAS. Lecture Notes in Computer Science, vol. 4963, pp. 337–340. Springer (2008)
Ngo, V.C., Carbonneaux, Q., Hoffmann, J.: Bounded expectations: Resource analysis for probabilistic programs. In: PLDI. pp. 496–512. ACM (2018)
Park, D.: Fixpoint induction and proofs of program properties. Mach. Intell. 5 (1969)
Puterman, M.L.: Markov Decision Processes. Wiley Series in Probability and Statistics, Wiley (1994)
Quatmann, T., Katoen, J.: Sound value iteration. In: CAV (1). Lecture Notes in Computer Science, vol. 10981, pp. 643–661. Springer (2018)
Rabe, M.N., Wintersteiger, C.M., Kugler, H., Yordanov, B., Hamadi, Y.: Symbolic approximation of the bounded reachability probability in large Markov chains. In: QEST. Lecture Notes in Computer Science, vol. 8657, pp. 388–403. Springer (2014)
Takisaka, T., Oyabu, Y., Urabe, N., Hasuo, I.: Ranking and repulsing supermartingales for reachability in randomized programs. ACM Trans. Program. Lang. Syst. 43(2), 5:1–5:46 (2021)
Tarski, A.: A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5(2), 285–309 (1955)
Data Availability Statement
The datasets generated during and/or analysed during the current study are available in the Zenodo repository [13].
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Batz, K., Chen, M., Junges, S., Kaminski, B.L., Katoen, J.-P., Matheja, C. (2023). Probabilistic Program Verification via Inductive Synthesis of Inductive Invariants. In: Sankaranarayanan, S., Sharygina, N. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2023. Lecture Notes in Computer Science, vol 13994. Springer, Cham. https://doi.org/10.1007/978-3-031-30820-8_25
Print ISBN: 978-3-031-30819-2
Online ISBN: 978-3-031-30820-8