
1 Introduction

Property directed reachability (PDR), also called IC3, introduced in [9, 13], is a model checking algorithm for proving or disproving safety properties. It has been successfully applied to software and hardware model checking, and has later been extended in several directions, including fbPDR [25, 26], which uses both forward and backward predicate transformers, and PrIC3 [6] for the quantitative safety problem for probabilistic systems. See [14] for a concise overview.

The original PDR assumes that systems are given by binary predicates representing transition relations. The PDR algorithm maintains data structures called frames and proof obligations—these are collections of predicates over states—and updates them. While this logic-based description immediately yields automated tools using SAT/SMT solvers, it limits target systems to qualitative and nondeterministic ones. This limitation was first overcome by PrIC3 [6] whose target is probabilistic systems. This suggests room for further generalization of PDR.

In this paper, we propose the first lattice theory-based generalization of the PDR algorithm; we call it LT-PDR. This makes the PDR algorithm apply to a wider class of safety problems, both qualitative and quantitative. We also derive a new concrete extension of PDR, namely one for Markov reward models.

We implemented the general algorithm LT-PDR in Haskell, in a way that maintains the theoretical abstraction and clarity. Deriving concrete instances for various types of systems is easy (for Kripke structures, probabilistic systems, etc.). We conducted an experimental evaluation, which shows that these easily-obtained instances have at least reasonable performance.

Preview of the Theoretical Contribution. We generalize the PDR algorithm so that it operates over an arbitrary complete lattice L. This generalization recasts the PDR algorithm to solve a general problem \(\mu F\le ^{?}\alpha \) of over-approximating the least fixed point of an \(\omega \)-continuous function \(F:L\rightarrow L\) by a safety property \(\alpha \). This lattice-theoretic generalization makes explicit the relationship between the PDR algorithm and the theory of fixed points. It also allows us to incorporate quantitative predicates suited for probabilistic verification.

More specifically, we reconstruct the original PDR algorithm as a combination of two constituent parts. They are called positive LT-PDR and negative LT-PDR. Positive LT-PDR comes from a witness-based proof method by the Knaster–Tarski fixed point theorem, and aims to verify \(\mu F\le ^{?}\alpha \). In contrast, negative LT-PDR comes from the Kleene fixed point theorem and aims to refute \(\mu F\le ^{?}\alpha \). The two algorithms build up witnesses in an iterative and nondeterministic manner, where nondeterminism accommodates guesses and heuristics. We identify the essence of PDR to be an ingenious combination of these two algorithms, in which intermediate results on one side (positive or negative) give informed guesses on the other side. This is how we formulate LT-PDR in Sect. 3.3.

We discuss several instances of our general theory of PDR, in three concrete settings: Kripke structures (where we obtain two instances of LT-PDR), Markov decision processes (MDPs), and Markov reward models. The two instances in the first setting essentially subsume many existing PDR algorithms, such as the original PDR [9, 13] and Reverse PDR [25, 26], and the one for MDPs resembles PrIC3 [6]. The last one (for Markov reward models) is a new algorithm that fully exploits the generality of our framework.

In fact, there is another dimension of theoretical generalization: the derivation of the above concrete instances follows a structural theory of state-based dynamics and predicate transformers. We formulate the structural theory in the language of category theory [3, 23]—using especially coalgebras [17] and fibrations [18]—following works such as [8, 15, 21, 28]. The structural theory tells us which safety problems arise under what conditions; it can therefore also suggest when a certain safety problem is unlikely to be formulable. The structural theory is important because it brings mathematical order to the PDR literature, in which theoretical developments tend to be closely tied to implementation and thus theoretical essences are often left implicit. For example, the theory is useful in classifying the plethora of PDR-like algorithms for Kripke structures (the original, Reverse PDR, fbPDR, etc.). See Sect. 5.1.

We present the above structural theory in Sect. 4 and briefly discuss its use in the derivation of concrete instances in Sect. 5. We note, however, that this categorical theory is not needed for reading and using the other parts of the paper.

There are other works on generalizing PDR [16, 24], but our identification of the interplay between Knaster–Tarski and Kleene is new; moreover, those works do not accommodate probabilistic verification. See [22, Appendix A] for further discussion.

Preliminaries. Let \((L,\le )\) be a poset. \((L,\le )^\mathrm {op}\) denotes the opposite poset \((L,\ge )\). Note that if \((L,\le )\) is a complete lattice then so is \((L,\le )^\mathrm {op}\). An \(\omega \)-chain (resp. \(\omega ^{op}\)-chain) in L is an \(\mathbb {N}\)-indexed family of increasing (resp. decreasing) elements in L. A monotone function \(F:L\rightarrow L\) is \(\omega \)-continuous (resp. \(\omega ^{op}\)-continuous) if F preserves existing suprema of \(\omega \)-chains (resp. infima of \(\omega ^\mathrm {op}\)-chains).
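The following is a minimal Haskell sketch (ours; the paper's actual module is described in Sect. 6) of a complete-lattice interface matching these preliminaries, together with the powerset lattice as a first instance; later sketches in Sect. 3 build on this interface.

```haskell
import qualified Data.Set as Set

-- A minimal complete-lattice interface: bottom, top, binary meet/join, and the order.
class CompleteLattice a where
  bot, top   :: a
  (/\), (\/) :: a -> a -> a      -- meet, join
  leq        :: a -> a -> Bool   -- the partial order (<=)

-- The powerset lattice P(S) over a finite state type (Bounded/Enum fix the universe S).
newtype Pred s = Pred (Set.Set s) deriving (Eq, Show)

instance (Ord s, Bounded s, Enum s) => CompleteLattice (Pred s) where
  bot                 = Pred Set.empty
  top                 = Pred (Set.fromList [minBound .. maxBound])
  Pred x /\ Pred y    = Pred (Set.intersection x y)
  Pred x \/ Pred y    = Pred (Set.union x y)
  Pred x `leq` Pred y = x `Set.isSubsetOf` y
```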

2 Fixed-points in Complete Lattices

Let \((L, \le )\) be a complete lattice and \(F: L \rightarrow L\) be a monotone function. When we analyze fixed points of F, pre/postfixed points play important roles.

Definition 2.1

A prefixed point of F is an element \(x \in L\) satisfying \(Fx \le x\). A postfixed point of F is an element \(x \in L\) satisfying \(x \le Fx\). We write \(\mathbf {Pre}(F)\) and \(\mathbf {Post}(F)\) for the set of prefixed points and postfixed points of F, respectively.

The following results are central in fixed point theory. They allow us to under/over-approximate the least/greatest fixed points.

Theorem 2.2

A monotone endofunction F on a complete lattice \((L, \le )\) has the least fixed point \(\mu F\) and the greatest fixed point \(\nu F\). Moreover,

  1.

    (Knaster–Tarski [30]) The set of fixed points forms a complete lattice. Furthermore, \(\mu F = \bigwedge \{x \in L \mid Fx \le x\}\) and \(\nu F = \bigvee \{x \in L \mid x \le Fx\}\).

  2.

    (Kleene, see e.g. [5]) If F is \(\omega \)-continuous, \(\mu F=\bigvee _{n \in \mathbb {N}} F^n\bot \). Dually, if F is \(\omega ^\mathrm {op}\)-continuous, \(\nu F = \bigwedge _{n \in \mathbb {N}} F^n\top \).    \(\square \)

Theorem 2.2.2 is known to hold for arbitrary \(\omega \)-cpos (complete lattices are their special case). A generalization of Theorem 2.2.2 is the Cousot–Cousot characterization [11], where F is assumed to be monotone (but not necessarily \(\omega \)-continuous) and we have \(\mu F=F^\kappa \bot \) for a sufficiently large, possibly transfinite, ordinal \(\kappa \). In this paper, for the algorithmic study of PDR, we assume the \(\omega \)-continuity of F. Note that an \(\omega \)-continuous F on a complete lattice is necessarily monotone.

We call the \(\omega \)-chain \(\bot \le F\bot \le \cdots \) the initial chain of F and the \(\omega ^\mathrm {op}\)-chain \(\top \ge F\top \ge \cdots \) the final chain of F. These appear in Theorem 2.2.2.

Theorem 2.2.1 and 2.2.2 yield the following witness notions for proving and disproving \(\mu F \le \alpha \), respectively.

Corollary 2.3

Let \((L,\le )\) be a complete lattice and \(F:L\rightarrow L\) be \(\omega \)-continuous.

  1.

    (KT) \(\mu F \le \alpha \) if and only if there is \(x\in L\) such that \(Fx \le x\le \alpha \).

  2.

    (Kleene) \(\mu F \not \le \alpha \) if and only if there is \(n\in \mathbb {N}\) and \(x \in L\) such that \(x \le F^n \bot \) and \(x \not \le \alpha \).    \(\square \)

By Corollary 2.3.1, proving \(\mu F \le \alpha \) is reduced to searching for \(x\in L\) such that \(Fx \le x\le \alpha \). We call such x a KT (positive) witness. In contrast, by Corollary 2.3.2, disproving \(\mu F \le \alpha \) is reduced to searching for \(n\in \mathbb {N}\) and \(x \in L\) such that \(x \le F^n \bot \) and \(x \not \le \alpha \). We call such x a Kleene (negative) witness.

Notation 2.4

We shall use lowercase (Roman and Greek) letters for elements of L (such as \(\alpha , x\in L\)), and uppercase letters for (finite or infinite) sequences over L (such as \(X\in L^{*}\) or \(L^{\omega }\)). The i-th element of a sequence X (or the \((i-j)\)-th element when indices start from j) is designated by a subscript: \(X_{i}\in L\).

3 Lattice-Theoretic Reconstruction of PDR

Towards the LT-PDR algorithm, we first introduce two simpler algorithms, called positive LT-PDR (Sect. 3.1) and negative LT-PDR (Sect. 3.2). The target problem of the LT-PDR algorithm is the following:

Definition 3.1

(the LFP-OA problem \(\mu F \le ^{?} \alpha \)). Let L be a complete lattice, \(F: L \rightarrow L\) be \(\omega \)-continuous, and \(\alpha \in L\). The lfp over-approximation (LFP-OA) problem asks if \(\mu F \le \alpha \) holds; the problem is denoted by \(\mu F \le ^{?} \alpha \).

Example 3.2

Consider a transition system, where S is the set of states, \(\iota \subseteq S\) is the set of initial states, \(\delta : S \rightarrow \mathcal {P}S\) is the transition relation, and \(\alpha \subseteq S\) is the set of safe states. Then, letting \(L{:}{=}\mathcal {P}S\) and \(F {:}{=}\iota \cup \bigcup _{s \in (-)}\delta (s)\), the lfp over-approximation problem \(\mu F \le ^{?} \alpha \) asks whether all reachable states are safe. This is exactly the problem studied by conventional IC3/PDR [9, 13].
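As an illustration only (this is our sketch, not the paper's implementation), the instance of Example 3.2 can be written down directly over finite state sets; the names step, reachable, and safeAll are ours.

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

type State = Int

-- F(A) = iota ∪ ⋃_{s ∈ A} δ(s), the predicate transformer of Example 3.2.
step :: Set State -> (State -> Set State) -> Set State -> Set State
step iota delta a = iota `Set.union` Set.unions [ delta s | s <- Set.toList a ]

-- μF computed by naive Kleene iteration ⊥, F⊥, F²⊥, ... (terminates on finite S).
reachable :: Set State -> (State -> Set State) -> Set State
reachable iota delta = go Set.empty
  where
    go x = let x' = step iota delta x in if x' == x then x else go x'

-- The LFP-OA question μF ≤? α: are all reachable states safe?
safeAll :: Set State -> (State -> Set State) -> Set State -> Bool
safeAll iota delta alpha = reachable iota delta `Set.isSubsetOf` alpha
```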

Positive LT-PDR iteratively builds a KT witness in a bottom-up manner that positively answers the LFP-OA problem, while negative LT-PDR iteratively builds a Kleene witness for the same LFP-OA problem. We shall present these two algorithms as clear reflections of two proof principles (Corollary 2.3), which come from the fundamental Knaster–Tarski and Kleene theorems, respectively.

The two algorithms build up witnesses in an iterative and nondeterministic manner. The nondeterminism is there for accommodating guesses and heuristics. We identify the essence of PDR to be an ingenious combination of these two algorithms, in which intermediate results on one side (positive or negative) give informed guesses on the other side. This way, each of the positive and negative algorithms provides heuristics in resolving the nondeterminism in the execution of the other. This is how we formulate the LT-PDR algorithm in Sect. 3.3.

The dual of the LFP-OA problem is called the gfp under-approximation (GFP-UA) problem: the GFP-UA problem for a complete lattice L, an \(\omega ^\mathrm {op}\)-continuous function \(F:L\rightarrow L\) and \(\alpha \in L\) asks whether the inequality \(\alpha \le \nu F\) holds, and is denoted by \(\alpha \le ^{?}\nu F\). It is evident that the GFP-UA problem for \((L, F, \alpha )\) is equivalent to the LFP-OA problem for \((L^\mathrm {op},F,\alpha )\). This suggests a dual algorithm, called LT-OpPDR, for the GFP-UA problem. See Remark 3.24 later.

3.1 Positive LT-PDR: Sequential Positive Witnesses

We introduce the notion of KT\(^\omega \) witness—a KT witness (Corollary 2.3) constructed in a sequential manner. Positive LT-PDR searches for a KT\(^\omega \) witness by growing its finitary approximations (called KT sequences).

Let L be a complete lattice. We regard each element \(x\in L\) as an abstract presentation of a predicate on states. The inequality \(x\le y\) means that the predicate x is stronger than the predicate y. We introduce the complete lattice \([n, L]\) of increasing chains of length \(n\in \mathbb {N}\), whose elements are \((X_0 \le \cdots \le X_{n-1})\) in L equipped with the element-wise order. We similarly introduce the complete lattice \([\omega , L]\) of \(\omega \)-chains in L. We lift \(F:L\rightarrow L\) to \(F^\# : [\omega , L] \rightarrow [\omega , L]\) and \(F^\#_n: [n, L] \rightarrow [n, L]\) (for \(n \ge 2\)) as follows. Note that the entries are shifted.

$$\begin{aligned} F^\#(X_0 \le X_1 \le \cdots ) &\;:=\; (\bot \le FX_0 \le FX_1 \le \cdots )\\ F^\#_n(X_0 \le \cdots \le X_{n-1}) &\;:=\; (\bot \le FX_0 \le \cdots \le FX_{n-2}) \end{aligned}$$
(1)

Definition 3.3

(\(\mathbf {KT}^{\omega }\) witness). Let \(L,F,\alpha \) be as in Definition 3.1. Define \(\varDelta \alpha := (\alpha \le \alpha \le \cdots )\). A \(\text {KT}^{\omega }\) witness is \(X \in [\omega , L]\) such that \(F^\# X \le X \le \varDelta \alpha \).

Theorem 3.4

Let \(L,F,\alpha \) be as in Definition 3.1. There exists a KT witness (Corollary 2.3) if and only if there exists a \(\text {KT}^{\omega }\)witness.    \(\square \)

Concretely, a KT witness x yields a \(\text {KT}^{\omega }\) witness \(x\le x\le \cdots \); a \(\text {KT}^{\omega }\) witness X yields a KT witness \(\bigvee _{n \in \omega } X_{n}\). A full proof (via Galois connections) is in [22].

Whenever \(\mu F\le \alpha \) holds, the initial chain \(\bot \le F\bot \le \cdots \) is a \(\text {KT}^{\omega }\) witness for it. There are other \(\text {KT}^{\omega }\) witnesses whose growth is accelerated by heuristic guesses—an extreme example is \(x\le x\le \cdots \) for a KT witness x. \(\text {KT}^{\omega }\) witnesses thus cover a spectrum of sequential witnesses for \(\mu F\le \alpha \) that mix routine constructions (i.e. applications of F) with heuristic guesses.

Definition 3.5

(KT sequence). Let \(L,F,\alpha \) be as in Definition 3.1. A KT sequence for \(\mu F \le ^{?} \alpha \) is a finite chain \((X_0\le \cdots \le X_{n-1})\), for \(n\ge 2\), satisfying

  1.

    \(X_{n-2} \le \alpha \); and

  2.

    X is a prefixed point of \(F^\#_n\), that is, \(FX_{i}\le X_{i+1}\) for each \(i\in [0, n-2]\).

A KT sequence \((X_{0} \le \cdots \le X_{n-1})\) is conclusive if \(X_{j+1} \le X_{j}\) for some j.

KT sequences are finite by definition. Note that the upper bound \(\alpha \) is imposed on all \(X_{i}\) but \(X_{n-1}\). This freedom in the choice of \(X_{n-1}\) offers room for heuristics, one that is exploited in the combination with negative LT-PDR (Sect. 3.3).
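For concreteness, here is a sketch (ours) of the finite lift \(F^\#_n\) from (1) and of the conditions of Definition 3.5, over the lattice interface sketched in the preliminaries, with a chain represented as a list [X_0, ..., X_{n-1}].

```haskell
-- The finite lift F#_n of (1): shift the entries and prepend ⊥.
liftFn :: CompleteLattice a => (a -> a) -> [a] -> [a]
liftFn f xs = bot : map f (init xs)

-- Definition 3.5: an increasing chain with X_i ≤ α for i ≤ n-2 and F X_i ≤ X_{i+1}.
isKTSequence :: CompleteLattice a => (a -> a) -> a -> [a] -> Bool
isKTSequence f alpha xs =
     length xs >= 2
  && and (zipWith leq xs (tail xs))                         -- X_0 ≤ X_1 ≤ ...
  && all (`leq` alpha) (init xs)                            -- X_i ≤ α for i ≤ n-2
  && and (zipWith (\xi xj -> f xi `leq` xj) xs (tail xs))   -- F X_i ≤ X_{i+1}

-- Conclusive: X_{j+1} ≤ X_j for some j.
isConclusive :: CompleteLattice a => [a] -> Bool
isConclusive xs = or (zipWith leq (tail xs) xs)
```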

We take KT sequences as finite approximations of \(\text {KT}^{\omega }\) witnesses. This view shall be justified by the partial order \((\preceq )\) between KT sequences defined below.

Definition 3.6

(order \(\preceq \) between KT sequences). We define a partial order relation \(\preceq \) on KT sequences as follows: \((X_0, \dots , X_{n-1}) \preceq (X'_0, \dots , X'_{m-1})\) if \(n \le m\) and \(X_j \ge X_j'\) for each \(0 \le j \le n-1\).

The order \(X_j\ge X_j'\) represents that \(X_j'\) is a stronger predicate (on states) than \(X_j\). Therefore \(X\preceq X'\) expresses that \(X'\) is a longer and stronger/more determined chain than X. We obtain \(\text {KT}^{\omega }\) witnesses as their \(\omega \)-suprema.

Theorem 3.7

Let \(L,F,\alpha \) be as in Definition 3.1. The set of KT sequences, augmented with the set of \(\text {KT}^{\omega }\) witnesses \(\{X \in [\omega , L] \mid F^\# X \le X \le \varDelta \alpha \}\) and ordered by the natural extension of \(\preceq \), is an \(\omega \)-cpo. In this \(\omega \)-cpo, each \(\text {KT}^{\omega }\) witness X is represented as the supremum of an \(\omega \)-chain of KT sequences, namely \(X = \bigvee _{n \ge 2} X|_n\) where \(X|_n \in [n, L]\) is the length-n prefix of X.    \(\square \)

Proposition 3.8

Let \(L,F,\alpha \) be as in Definition 3.1. There exists a \(\text {KT}^{\omega }\) witness if and only if there exists a conclusive KT sequence.

Proof

(\(\Rightarrow \)): If there exists a \(\text {KT}^{\omega }\) witness, \(\mu F \le \alpha \) holds by Corollary 2.3 and Theorem 3.4. Therefore, the “informed guess” \((\mu F \le \mu F)\) gives a conclusive KT sequence. (\(\Leftarrow \)): When X is a conclusive KT sequence with \(X_j = X_{j+1}\), \(X_0 \le \cdots \le X_j = X_{j+1} = \cdots \) is a \(\text {KT}^{\omega }\) witness.    \(\square \)

The proposition above yields the following partial algorithm that aims to answer the LFP-OA problem positively. It searches for a conclusive KT sequence.

Definition 3.9

(positive LT-PDR). Let \(L,F,\alpha \) be as in Definition 3.1. Positive LT-PDR is the algorithm shown in Algorithm 1, which says ‘True’ to the LFP-OA problem \(\mu F \le ^{?} \alpha \) if successful.


The rules are designed by the following principles.

Valid is applied when the current X is conclusive.

Unfold extends X with \(\top \). In fact, we can use any element x satisfying \(X_{n-1} \le x\) and \(FX_{n-1} \le x\) in place of \(\top \) (this amounts to Unfold followed by an application of Induction with x). The condition \(X_{n-1} \le \alpha \) is checked to ensure that the extended X satisfies the condition in Definition 3.5.1.

Induction strengthens X, replacing \(X_j\) with \(X_j \wedge x\) for each \(j \le k\). The first condition \(X_{k} \not \le x\) ensures that this rule indeed strengthens X, and the second condition \(F(X_{k-1} \wedge x) \le x\) ensures that the strengthened X satisfies the condition in Definition 3.5.2, that is, \(F^\#_n X \le X\) (see the proof in [22]).
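The following sketch (ours) spells out the guards of the three rules on a list [X_0, ..., X_{n-1}]; which enabled rule to fire, and with which x, is precisely the nondeterminism that heuristics (and the combination of Sect. 3.3) resolve. The Induction update shown (meeting x into every X_j with j ≤ k) is our reading of the rule and is included only for illustration.

```haskell
-- Guards of the rules of Algorithm 1 (positive LT-PDR).
validGuard :: CompleteLattice a => [a] -> Bool
validGuard = isConclusive                        -- Valid: the current X is conclusive

unfoldGuard :: CompleteLattice a => a -> [a] -> Bool
unfoldGuard alpha xs = last xs `leq` alpha       -- Unfold: X_{n-1} ≤ α, then append ⊤

-- Induction guard for a chosen k (assuming 1 ≤ k ≤ n-1) and candidate x.
inductionGuard :: CompleteLattice a => (a -> a) -> [a] -> Int -> a -> Bool
inductionGuard f xs k x =
  not ((xs !! k) `leq` x) && f ((xs !! (k - 1)) /\ x) `leq` x

-- One possible reading of the Induction update: X_j := X_j ∧ x for all j ≤ k.
inductionApply :: CompleteLattice a => [a] -> Int -> a -> [a]
inductionApply xs k x = [ if j <= k then xj /\ x else xj | (j, xj) <- zip [0 ..] xs ]
```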

Theorem 3.10

Let \(L,F,\alpha \) be as in Definition 3.1. Then positive LT-PDR is sound, i.e. if it outputs ‘True’ then \(\mu F \le \alpha \) holds.

Moreover, assume \(\mu F\le \alpha \) is true. Then positive LT-PDR is weakly terminating (meaning that suitable choices of x when applying Induction make the algorithm terminate).    \(\square \)

The last “optimistic termination” is realized by the informed guess \(\mu F\) as x in Induction. To guarantee the termination of LT-PDR, it suffices to assume that the complete lattice L is well-founded (no infinite decreasing chain exists in L) and that there is no strictly increasing \(\omega \)-chain under \(\alpha \) in L, although we cannot hope for this assumption to hold in every instance (Sect. 5.2, 5.3).

Lemma 3.11

Let \(L,F,\alpha \) be as in Definition 3.1. If \(\mu F \le \alpha \), then for any KT sequence X, at least one of the three rules in Algorithm 1 is enabled.

Moreover, for any KT sequence X, let \(X'\) be obtained by applying either Unfold or Induction. Then \(X\preceq X'\) and \(X\ne X'\).    \(\square \)

Theorem 3.12

Let \(L,F,\alpha \) be as in Definition 3.1. Assume that \(\le \) in L is well-founded and \(\mu F \le \alpha \). Then, any non-terminating run of positive LT-PDR converges to a \(\text {KT}^{\omega }\) witness (meaning that it gives a \(\text {KT}^{\omega }\) witness in \(\omega \)-steps). Moreover, if there is no strictly increasing \(\omega \)-chain bounded by \(\alpha \) in L, then positive LT-PDR is strongly terminating.    \(\square \)

3.2 Negative LT-PDR: Sequential Negative Witnesses

We next introduce Kleene sequences as a lattice-theoretic counterpart of proof obligations in the standard PDR. Kleene sequences represent a chain of sufficient conditions to conclude that certain unsafe states are reachable.

Definition 3.13

(Kleene sequence). Let \(L,F,\alpha \) be as in Definition 3.1. A Kleene sequence for the LFP-OA problem \(\mu F \le ^? \alpha \) is a finite sequence \((C_0, \dots , C_{n-1})\), for \(n \ge 0\) (C is empty if \(n=0\)), satisfying

  1.

    \(C_j \le FC_{j-1}\) for each \(1 \le j \le n-1\);

  2.

    \(C_{n-1} \not \le \alpha \).

A Kleene sequence \((C_0, \dots , C_{n-1})\) is conclusive if \(C_0 = \bot \). We may use \(i \ (0 \le i \le n)\) instead of 0 as the starting index of the Kleene sequence C.

When we have a Kleene sequence \(C=(C_{0},\dots ,C_{n-1})\), the implication \((C_{j}\le F^{j}\bot ) \implies (C_{j+1}\le F^{j+1}\bot )\) holds for each \(0 \le j < n-1\). Therefore, when C is conclusive, \(C_{n-1}\) is a Kleene witness (Corollary 2.3.2).
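A sketch (ours) of Definition 3.13 as an executable check over the lattice interface from the preliminaries, with C given as a list [C_0, ..., C_{n-1}]:

```haskell
-- Definition 3.13: C_j ≤ F C_{j-1} for 1 ≤ j ≤ n-1, and C_{n-1} ≰ α.
isKleeneSequence :: CompleteLattice a => (a -> a) -> a -> [a] -> Bool
isKleeneSequence f alpha cs =
     not (null cs)
  && and (zipWith (\cPrev cNext -> cNext `leq` f cPrev) cs (tail cs))
  && not (last cs `leq` alpha)

-- Conclusive: C_0 = ⊥ (being below ⊥ suffices, since ⊥ is the least element).
isConclusiveKleene :: CompleteLattice a => [a] -> Bool
isConclusiveKleene cs = not (null cs) && head cs `leq` bot
```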

Proposition 3.14

Let \(L,F,\alpha \) be as in Definition 3.1. There exists a Kleene (negative) witness if and only if there exists a conclusive Kleene sequence.

Proof

(\(\Rightarrow \)): If there exists a Kleene witness x such that \(x \le F^n \bot \) and \(x \not \le \alpha \), \((\bot , F\bot , \dots , F^n \bot )\) is a conclusive Kleene sequence. (\(\Leftarrow \)): Assume there exists a conclusive Kleene sequence C. Then \(C_{n-1}\) satisfies \(C_{n-1} \le F^{n-1}\bot \) and \(C_{n-1} \not \le \alpha \) because of \(C_{n-1} \le FC_{n-2} \le \cdots \le F^{n-1}C_0 = F^{n-1}\bot \) and Definition 3.13.2.    \(\square \)

This proposition suggests the following algorithm to answer the LFP-OA problem negatively. It searches for a conclusive Kleene sequence. The algorithm updates a Kleene sequence until its first component becomes \(\bot \).

Definition 3.15

(negative LT-PDR). Let \(L,F,\alpha \) be as in Definition 3.1. Negative LT-PDR is the algorithm shown in Algorithm 2, which says ‘False’ to the LFP-OA problem \(\mu F \le ^? \alpha \) if successful.

The rules are designed by the following principles.

Candidate initializes C with only one element x. The element x has to be chosen such that \(x \not \le \alpha \) to ensure Definition 3.13.2.

Model is applied when the current Kleene sequence C is conclusive.

Decide prepends x to C. The condition \(C_0 \le Fx\) ensures Definition 3.13.1.
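Analogously, a sketch (ours) of the guards of the three rules, building on the checks above; choosing x is again left to heuristics.

```haskell
-- Guards of the rules of Algorithm 2 (negative LT-PDR).
candidateGuard :: CompleteLattice a => a -> a -> Bool
candidateGuard alpha x = not (x `leq` alpha)     -- Candidate: initialize C = (x) with x ≰ α

modelGuard :: CompleteLattice a => [a] -> Bool
modelGuard = isConclusiveKleene                  -- Model: C_0 = ⊥, so answer 'False'

decideGuard :: CompleteLattice a => (a -> a) -> [a] -> a -> Bool
decideGuard f cs x = head cs `leq` f x           -- Decide: C_0 ≤ F x, then prepend x
```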

Theorem 3.16

Let \(L,F,\alpha \) be as in Definition 3.1.

  1.

    Negative LT-PDR is sound, i.e. if it outputs ‘False’ then \(\mu F \not \le \alpha \).

  2.

    Assume \(\mu F \not \le \alpha \) is true. Then negative LT-PDR is weakly terminating (meaning that suitable choices of x when applying rules Candidate and Decide make the algorithm terminate).    \(\square \)

3.3 LT-PDR: Integrating Positive and Negative

We have introduced two simple PDR algorithms, called positive LT-PDR (Sect. 3.1) and negative LT-PDR (Sect. 3.2). They are so simple that they have potential inefficiencies. Specifically, in positive LT-PDR it is unclear how to choose \(x \in L\) in Induction, while negative LT-PDR may easily diverge because the rules Candidate and Decide may choose an \(x\in L\) that does not lead to a conclusive Kleene sequence. We resolve these inefficiencies by combining positive LT-PDR and negative LT-PDR. The combined PDR algorithm is called LT-PDR, and it is a lattice-theoretic generalization of conventional PDR.

Note that negative LT-PDR is only weakly terminating. Even worse, it is easy to make it diverge—after a choice of x in Candidate or Decide such that \(x\not \le \mu F\), no continued execution of the algorithm can lead to a conclusive Kleene sequence. For deciding \(\mu F \le ^? \alpha \) efficiently, therefore, it is crucial to detect such useless Kleene sequences.

The core fact that underlies the efficiency of PDR is the following proposition, which says that a KT sequence (in positive LT-PDR) can quickly tell that a Kleene sequence (in negative LT-PDR) is useless. This fact is crucially used for many rules in LT-PDR (Definition 3.20).

Proposition 3.17

Let \(C=(C_i, \dots , C_{n-1})\) be a Kleene sequence \((2 \le n, 0 < i \le n-1)\) and \(X=(X_0 \le \cdots \le X_{n-1})\) be a KT sequence. Then

  1.

    \(C_i \not \le X_i\) implies that C cannot be extended to a conclusive one, that is, there does not exist \(C_0, \dots , C_{i-1}\) such that \((C_0, \dots , C_{n-1})\) is conclusive.

  2.

    \(C_i \not \le F X_{i-1}\) implies that C cannot be extended to a conclusive one.

  3.

    There is no conclusive Kleene sequence with length \(n-1\).    \(\square \)

The proof relies on the following lemmas.

Lemma 3.18

Any KT sequence \((X_0 \le \cdots \le X_{n-1})\) over-approximates the initial chain: \( F^i\bot \le X_i\) holds for any i such that \(0\le i\le n-1\).    \(\square \)

Lemma 3.19

Let \(C=(C_i, \dots , C_{n-1})\) be a Kleene sequence \((0 < i \le n-1)\) and \((X_0 \le \cdots \le X_{n-1})\) be a KT sequence. The following satisfy \(1 \Leftrightarrow 2 \Rightarrow 3\).

  1.

    The Kleene sequence C can be extended to a conclusive one.

  2.

    \(C_i \le F^i\bot \).

  3.

    \(C_i \le F^j X_{i-j}\) for each j with \(0 \le j \le i\).    \(\square \)

Using the above lattice-theoretic properties, we combine positive and negative LT-PDRs into the following LT-PDR algorithm. It is also a lattice-theoretic generalization of the original PDR algorithm. The combination exploits the mutual relationship between KT sequences and Kleene sequences, exhibited as Proposition 3.17, for narrowing down choices in positive and negative LT-PDRs.

Definition 3.20

(LT-PDR). Given a complete lattice L, an \(\omega \)-continuous function \(F: L \rightarrow L\), and an element \(\alpha \in L\), LT-PDR is the algorithm shown in Algorithm 3 for the LFP-OA problem \(\mu F \le ^? \alpha \).

The rules are designed by the following principles.

(Valid, Unfold, and Induction): These rules are almost the same as in positive LT-PDR. In Unfold, we reset the Kleene sequence because of Proposition 3.17.3. Occurrences of Unfold punctuate an execution of the algorithm: between two occurrences of Unfold, a main goal (towards a negative conclusion) is to construct a conclusive Kleene sequence of the same length as X.

(Candidate, Model, and Decide): These rules have many similarities to those in negative LT-PDR. Differences are as follows: the Candidate and Decide rules impose \(x \le X_i\) on the new element x in \((x, C_{i+1}, \dots , C_{n-1})\) because Proposition 3.17.1 tells us that other choices are useless. In Model, we only need to check whether \(C_1\) is defined instead of \(C_0 = \bot \). Indeed, since \(C_1\) is added in Candidate or Decide, \(C_1 \le X_1 = F\bot \) always holds. Therefore, \(2 \Rightarrow 1\) in Lemma 3.19 shows that \((\bot , C_1, \dots , C_{n-1})\) is conclusive.

(Conflict): This new rule emerges from the combination of positive and negative LT-PDRs. It is applied when \(C_i \not \le FX_{i-1}\), which confirms that the current C cannot be extended to a conclusive one (Proposition 3.17.2). Therefore, we eliminate \(C_i\) from C and strengthen X so that we cannot choose \(C_i\) again, that is, so that \(C_i \not \le (X_i \wedge x)\). Let us explain how X is strengthened. The element x has to be chosen so that \(C_i \not \le x\) and \(F(X_{i-1} \wedge x) \le x\). The former dis-inequality ensures that the strengthened X satisfies \(C_i \not \le (X_i \wedge x)\), and the latter inequality ensures that the strengthened X is still a KT sequence. One can see that Conflict is Induction with the additional condition \(C_i \not \le x\); this condition narrows down the search space for x using the Kleene sequence C.

Canonical choices of \(x\in L\) in Candidate, Decide, and Conflict are \(x:=X_{n-1}\), \(x:=X_{i-1}\), and \(x:=FX_{i-1}\), respectively. However, there can be cleverer choices; e.g. \(x:= S\setminus (C_i\setminus FX_{i-1})\) in Conflict when \(L=\mathcal {P}S\).
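In the powerset case, the "cleverer" Conflict choice above reads as follows (a sketch with our names):

```haskell
import qualified Data.Set as Set

-- x := S \ (C_i \ F X_{i-1}) for L = P(S): keep everything except the part of C_i
-- that is not covered by F X_{i-1}.
conflictChoice :: Ord s => Set.Set s -> Set.Set s -> Set.Set s -> Set.Set s
conflictChoice univ ci fxPrev = univ `Set.difference` (ci `Set.difference` fxPrev)
```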

Lemma 3.21

Each rule of LT-PDR, when applied to a pair of a KT and a Kleene sequence, yields a pair of a KT and a Kleene sequence.    \(\square \)

Theorem 3.22

(correctness). LT-PDR is sound, i.e. if it outputs ‘True’ then \(\mu F \le \alpha \) holds, and if it outputs ‘False’ then \(\mu F \not \le \alpha \) holds.    \(\square \)

Many existing PDR algorithms ensure termination if the state space is finite. A general principle behind this is stated below. Note that it rarely applies to infinitary or quantitative settings, where we would need some abstraction for termination.

Proposition 3.23

(termination). LT-PDR terminates regardless of the order of the rule-applications if the following conditions are satisfied.

  1.

    Valid and Model rules are immediately applied if applicable.

  2.

    \((L,\le )\) is well-founded.

  3.

    Either of the following is satisfied: a) \(\mu F \le \alpha \) and \((L, \le )\) has no strictly increasing \(\omega \)-chain bounded by \(\alpha \), or b) \(\mu F \not \le \alpha \).    \(\square \)

Cond. 1 is natural: it just requires LT-PDR to immediately conclude ‘True’ or ‘False’ if it can. Conds. 2–3 are always satisfied when L is finite.

Theorem 3.22 and Proposition 3.23 still hold if the Induction rule is dropped. However, the rule can accelerate the convergence of KT sequences and improve efficiency.

Remark 3.24

(LT-OpPDR). The GFP-UA problem \(\alpha \le ^{?}\nu F\) is the dual of LFP-OA, obtained by opposing the order \(\le \) in L. We can also dualize the LT-PDR algorithm (Algorithm 3), obtaining what we call the LT-OpPDR algorithm for GFP-UA. Moreover, we can express LT-OpPDR as LT-PDR if a suitable involution \(\lnot :L\rightarrow L^{\mathrm {op}}\) is present. See [22, Appendix B] for further details; see also Proposition 4.3.

4 Structural Theory of PDR by Category Theory

Before we discuss concrete instances of LT-PDR in Sect. 5, we develop a structural theory of transition systems and predicate transformers as a basis of LT-PDR. The theory is formulated in the language of category theory [3, 17, 18, 23]. We use category theory because 1) categorical modeling of relevant notions is well established in the community (see e.g. [2, 8, 17, 18, 27]), and 2) it gives us the right level of abstraction that accommodates a variety of instances. In particular, qualitative and quantitative settings are described in a uniform manner.

Table 1. Categorical modeling of state-based dynamics and predicate transformers

Our structural theory (Sect. 4) serves as a backend, not a frontend. That is,

  • the theory in Sect. 4 is important in that it explains how the instances in Sect. 5 arise and why others do not, but

  • the instances in Sect. 5 are described in non-categorical terms, so readers who skipped Sect. 4 will have no difficulties following Sect. 5 and using those instances.

4.1 Categorical Modeling of Dynamics and Predicate Transformers

Our interests are in instances of the LFP-OA problem \(\mu F \le ^{?} \alpha \) (Definition 3.1) that appear in model checking. In this context, 1) the underlying lattice L is that of predicates over a state space, and 2) the function \(F:L\rightarrow L\) arises from the dynamic/transition structure, specifically as a predicate transformer. The categorical notions in Table 1 model these ideas (state-based dynamics, predicate transformers). This modeling is well-established in the community.

Our introduction of Table 1 here is minimal, due to the limited space. See [22, Appendix C] and the references therein for more details.

A category consists of objects and arrows between them. In Table 1, categories occur twice: 1) a base category \(\mathbb {B}\) where objects are typically sets and arrows are typically functions; and 2) fiber categories \(\mathbb {E}_{S}\), defined for each object S of \(\mathbb {B}\), that are identified with the lattices of predicates. Specifically, objects \(P,Q,\dotsc \) of \(\mathbb {E}_{S}\) are predicates over S, and an arrow \(P\rightarrow Q\) represents logical implication. A general fact behind the last is that every preorder is a category—see e.g. [3].

Transition Systems as Coalgebras. State-based transition systems are modeled as coalgebras in the base category \(\mathbb {B}\) [17]. We use a functor \(G:\mathbb {B}\rightarrow \mathbb {B}\) to represent a transition type. A G-coalgebra is an arrow \(\delta :S\rightarrow GS\), where S is a state space and \(\delta \) describes the dynamics. For example, a Kripke structure can be identified with a pair \((S,\delta )\) of a set S and a function \(\delta :S\rightarrow \mathcal {P}S\), where \(\mathcal {P}S\) denotes the powerset. The powerset construction \(\mathcal {P}\) is known to be a functor \(\mathcal {P}:\mathbf {Set}\rightarrow \mathbf {Set}\); therefore Kripke structures are \(\mathcal {P}\)-coalgebras. For other choices of G, G-coalgebras become different types of transition systems, such as MDPs (Sect. 5.2) and Markov Reward Models (Sect. 5.3).

Predicates Form a Fibration. Fibrations are powerful categorical constructs that can model various indexed entities; see e.g. [18] for its general theory. Our use of them is for organizing the lattices \(\mathbb {E}_{S}\) of predicates over a set S, indexed by the choice of S. For example, \(\mathbb {E}_{S}=2^{S}\)—the lattice of subsets of S—for modeling qualitative predicates. For quantitative reasoning (e.g. for MDPs), we use \(\mathbb {E}_{S}=[0,1]^{S}\), where [0, 1] is the unit interval. This way, qualitative and quantitative reasonings are mathematically unified in the language of fibrations.

A fibration is a functor \(p:\mathbb {E}\rightarrow \mathbb {B}\) with suitable properties; it can be thought of as a collection \((\mathbb {E}_{S})_{S\in \mathbb {B}}\) of fiber categories \(\mathbb {E}_{S}\)—indexed by objects S of \(\mathbb {B}\)—suitably organized as a single category \(\mathbb {E}\). Notable in this organization is that we obtain the pullback functor \(l^{*}:\mathbb {E}_{Y}\rightarrow \mathbb {E}_{X}\) for each arrow \(l:X\rightarrow Y\) in \(\mathbb {B}\). In our examples, \(l^{*}\) is a substitution along l in predicates—\(l^{*}\) is the monotone map that carries a predicate P(y) over Y to the predicate P(l(x)) over X.

In this paper, we restrict to a subclass of fibrations (called \({\boldsymbol{CLat}}_{\wedge }\)-fibrations) in which every fiber category \(\mathbb {E}_{S}\) is a complete lattice, and each pullback functor preserves all meets. We therefore write \(P\le Q\) for arrows in \(\mathbb {E}_{S}\); this represents logical implication, as announced above. Notice that each \(f^*\) has a left adjoint (lower adjoint in terms of Galois connection), which exists by Freyd’s adjoint functor theorem. The left adjoint is denoted by \(f_*\).


We also consider a lifting \(\dot{G}:\mathbb {E}\rightarrow \mathbb {E}\) of G along p; it is a functor \(\dot{G}\) such that \(p\dot{G}=Gp\). It specifies the logical interpretation of the transition type G. For example, for \(G=\mathcal {P}\) (the powerset functor) above, two choices of \(\dot{G}\) correspond to the may and must modalities. See e.g. [2, 15, 20, 21].

Categorical Predicate Transformer. The above constructs allow us to model predicate transformers—F in our examples of the LFP-OA problem \(\mu F \le ^{?} \alpha \)—in categorical terms. A predicate transformer along a coalgebra \(\delta :S\rightarrow GS\) with respect to the lifting \(\dot{G}\) is simply the composite \( \mathbb {E}_{S}\xrightarrow {\dot{G}} \mathbb {E}_{GS}\xrightarrow {\delta ^{*}} \mathbb {E}_{S} \), where the first \(\dot{G}\) is the restriction of \(\dot{G}:\mathbb {E}\rightarrow \mathbb {E}\) to \(\mathbb {E}_{S}\). Intuitively, 1) given a postcondition P in \(\mathbb {E}_{S}\), 2) it is first interpreted as the predicate \(\dot{G}P\) over GS, and then 3) it is pulled back along the dynamics \(\delta \) to yield a precondition \(\delta ^{*}\dot{G}P\). Such (backward) predicate transformers are fundamental in a variety of model checking problems.

4.2 Structural Theory of PDR from Transition Systems

We formulate a few general safety problems. We show how they are amenable to the LT-PDR (Definition 3.20) and LT-OpPDR (Remark 3.24) algorithms.

Definition 4.1

(backward safety problem, BSP). Let p be a CLat \(_{\wedge }\)-fibration, \(\delta : S \rightarrow GS\) be a coalgebra in \(\mathbb {B}\), and \(\dot{G}: \mathbb {E} \rightarrow \mathbb {E}\) be a lifting of G along p such that \(\dot{G}_X:\mathbb {E}_X\rightarrow \mathbb {E}_{GX}\) is \(\omega ^\mathrm {op}\)-continuous for each \(X\in \mathbb {B}\). The backward safety problem for \((\iota \in \mathbb {E}_S, \delta , \alpha \in \mathbb {E}_S)\) in \((p, G, \dot{G})\) is the GFP-UA problem for \((\mathbb {E}_S, \alpha \wedge \delta ^*\dot{G}, \iota )\), that is,

$$\begin{aligned} \iota \;\le ^?\; \nu x.\, \alpha \wedge \delta ^*\dot{G} x. \end{aligned}$$
(2)

Here, \(\iota \) represents the initial states and \(\alpha \) represents the safe states. The predicate transformer \(x\mapsto \alpha \wedge \delta ^*\dot{G}x\) in (2) is the standard one for modeling safety—currently safe (\(\alpha \)), and the next time x (\(\delta ^*\dot{G}x\)). Its gfp is the safety property; (2) asks if all initial states (\(\iota \)) satisfy the safety property. Since the backward safety problem is a GFP-UA problem, we can solve it by LT-OpPDR (Remark 3.24).


Additional assumptions allow us to reduce the backward safety problem to LFP-OA problems, which are solvable by LT-PDR, in the two ways described below.

The first case requires the existence of the left adjoint to the predicate transformer \(\delta ^*\dot{G}_S:\mathbb {E}_S\rightarrow \mathbb {E}_S\). Then we can translate BSP to the following LFP-OA problem. It directly asks whether all reachable states are safe.

Proposition 4.2

(forward safety problem, FSP). In the setting of Definition 4.1, assume that each \(\dot{G}_X: \mathbb {E}_{X} \rightarrow \mathbb {E}_{GX}\) preserves all meets. Then by letting \(\dot{H}_S: \mathbb {E}_{GS} \rightarrow \mathbb {E}_{S}\) be the left adjoint of \(\dot{G}_S\), the BSP (2) is equivalent to the LFP-OA problem for \((\mathbb {E}_S,\iota \vee \dot{H}_S\delta _*,\alpha )\):

$$\begin{aligned} \mu x.\,\iota \vee \dot{H}_S\delta _*x\; \le ^?\; \alpha . \end{aligned}$$
(3)

This problem is called the forward safety problem for \((\iota , \delta , \alpha )\) in \((p, G, \dot{G})\).    \(\square \)

The second case assumes that the complete lattice \(\mathbb {E}_S\) of predicates admits an involution operator \(\lnot :\mathbb {E}_S\rightarrow \mathbb {E}_S^\mathrm {op}\) (cf. [22, Appendix B]).

Proposition 4.3

(inverse backward safety problem, IBSP). In the setting of Definition 4.1, assume further that there is a monotone function \(\lnot : \mathbb {E}_S \rightarrow \mathbb {E}^\mathrm {op}_S\) satisfying \(\lnot \circ \lnot = \mathrm {id}\). Then the backward safety problem (2) is equivalent to the LFP-OA problem for \((\mathbb {E}_S,(\lnot \alpha )\vee (\lnot \circ \delta ^*\dot{G}\circ \lnot ),\lnot \iota )\), that is,

$$\begin{aligned} \mu x.\,(\lnot \alpha ) \vee (\lnot \circ \delta ^*\dot{G}\circ \lnot )(x)\; \le ^{?}\; \lnot \iota . \end{aligned}$$
(4)

We call (4) the inverse backward safety problem for \((\iota , \delta , \alpha )\) in \((p, G, \dot{G})\). Here \((\lnot \alpha ) \vee (\lnot \circ \delta ^*\dot{G}\circ \lnot (-))\) is the inverse backward predicate transformer.    \(\square \)

When both additional assumptions are fulfilled (in Proposition 4.2 and 4.3), we obtain two LT-PDR algorithms to solve BSP. One can even simultaneously run these two algorithms—this is done in fbPDR [25, 26]. See also Sect. 5.1.

5 Known and New PDR Algorithms as Instances

We present several concrete instances of our LT-PDR algorithms. The one for Markov reward models is new (Sect. 5.3). We also sketch how those instances can be systematically derived by the theory in Sect. 4; details are in [22, Appendix D].

5.1 LT-PDRs for Kripke Structures: \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) and \({\textbf {PDR}}^{{\textbf {IB-Kr}}}\)

In most of the PDR literature, the target system is a Kripke structure that arises from a program’s operational semantics. A Kripke structure consists of a set S of states and a transition relation \(\delta \subseteq S\times S\) (here we ignore initial states and atomic propositions). The basic problem formulation is as follows.

Definition 5.1

(backward safety problem (BSP) for Kripke structures). The BSP for a Kripke structure \((S,\delta )\), a set \(\iota \in 2^S\) of initial states, and a set \(\alpha \in 2^S\) of safe states, is the GFP-UA problem \( \iota \,\le ^?\, \nu x.\, \alpha \wedge F' x, \) where \(F':2^{S}\rightarrow 2^{S}\) is defined by \(F'(A)\mathbin {:=}\{s\mid \forall s'.\, ((s,s')\in \delta \Rightarrow s'\in A)\}\).

It is clear that the GFP in Definition 5.1 represents the set of states from which all reachable states are in \(\alpha \). Therefore the BSP is the usual safety problem.

The above BSP is easily seen to be equivalent to the following problems.

Proposition 5.2

(forward safety problem (FSP) for Kripke structures). The BSP in Definition 5.1 is equivalent to the LFP-OA problem \( \mu x.\, \iota \vee F''x\, \le ^? \,\alpha \), where \(F'':2^{S}\rightarrow 2^{S}\) is defined by \(F''(A)\mathbin {:=}\bigcup _{s\in A}\{s'\mid (s,s')\in \delta \}\).    \(\square \)

Proposition 5.3

(inverse backward safety problem (IBSP) for Kripke structures). The BSP in Definition 5.1 is equivalent to the LFP-OA problem \( \mu x.\, \lnot \alpha \vee \lnot F'(\lnot x) \,\le ^?\, \lnot \iota \), where \(\lnot :2^{S}\rightarrow 2^{S}\) is the complement function \(A\mapsto S\setminus A\).    \(\square \)
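For concreteness, the transformers \(F'\), \(F''\), and \(\lnot \) above can be transliterated to Haskell on finite Kripke structures as follows (our sketch, with \(\delta \) given as a set of pairs):

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

-- F' A = { s | every δ-successor of s is in A }   (backward, the "box" transformer)
fBox :: Ord s => Set s -> Set (s, s) -> Set s -> Set s
fBox univ delta a =
  Set.filter (\s -> and [ s' `Set.member` a | (t, s') <- Set.toList delta, t == s ]) univ

-- F'' A = { s' | (s, s') ∈ δ for some s ∈ A }     (forward, the successor image)
fPost :: Ord s => Set (s, s) -> Set s -> Set s
fPost delta a = Set.fromList [ s' | (s, s') <- Set.toList delta, s `Set.member` a ]

-- ¬A = S \ A                                      (the involution of Proposition 5.3)
neg :: Ord s => Set s -> Set s -> Set s
neg univ a = univ `Set.difference` a
```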

Instances of LT-PDR. The FSP and IBSP (Propositions 5.2–5.3), being LFP-OA, are amenable to the LT-PDR algorithm (Definition 3.20). Thus we obtain two instances of LT-PDR; we call them \(\boldsymbol{PDR}^{\boldsymbol{F\text {-}Kr}}\) and \(\boldsymbol{PDR}^{\boldsymbol{IB\text {-}Kr}}\). \({\textbf {PDR}}^{{\textbf {IB-Kr}}}\) is a step-by-step dual to the application of LT-OpPDR to the BSP (Definition 5.1)—see Remark 3.24.

We compare these two instances of LT-PDR with algorithms in the literature. If we impose \(|C_i|=1\) on each element \(C_i\) of Kleene sequences, the \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) instance of LT-PDR coincides with the conventional IC3/PDR [9, 13]. In contrast, \({\textbf {PDR}}^{{\textbf {IB-Kr}}}\) coincides with Reverse PDR in [25, 26]. The parallel execution of \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) and \({\textbf {PDR}}^{{\textbf {IB-Kr}}}\) roughly corresponds to fbPDR [25, 26].

Structural Derivation. The equivalent problems (Propositions 5.2–5.3) are derived systematically from the categorical theory in Sect. 4.2. Indeed, using a lifting \(\dot{\mathcal {P}}:2^{S} \rightarrow 2^{\mathcal {P}S}\) such that \(A\mapsto \{A'\mid A'\subseteq A\}\) (the must modality \(\Box \)), \(F'\) in Definition 5.1 coincides with \(\delta ^{*}\dot{\mathcal {P}}\) in (2). The above \(\dot{\mathcal {P}}\) preserves meets (cf. the modal axiom \(\Box (\varphi \wedge \psi )\cong \Box \varphi \wedge \Box \psi \), see e.g. [7]); thus Proposition 4.2 derives the FSP. Finally, \(\lnot \) in Proposition 5.3 allows the use of Proposition 4.3. More details are in [22, Appendix D].

5.2 LT-PDR for MDPs: \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\)

The only known PDR-like algorithm for quantitative verification is PrIC3 [6] for Markov decision processes (MDPs). Here we instantiate LT-PDR for MDPs and compare it with PrIC3.

An MDP consists of a set S of states, a set \(\mathrm {Act}\) of actions and a transition function \(\delta \) mapping \(s \in S\) and \(a \in \mathrm {Act}\) to either \(*\) (“the action a is unavailable at s”) or a probability distribution \(\delta (s)(a)\) over S.

Definition 5.4

(IBSP for MDPs). The inverse backward safety problem (IBSP) for an MDP \((S,\delta )\), an initial state \(s_{\iota }\in S\), a real number \(\lambda \in [0,1]\), and a set \(\alpha \subseteq S\) of safe states, is the LFP-OA problem \( \mu x.\, F'( x) \;\le ^?\; d_{\iota ,\lambda } \). Here \(d_{\iota ,\lambda }:S\rightarrow [0,1]\) is the predicate such that \(d_{\iota ,\lambda }(s_{\iota })=\lambda \) and \(d_{\iota ,\lambda }(s)=1\) otherwise. \(F':[0,1]^{S}\rightarrow [0,1]^{S}\) is defined by \( F' (d)(s) =1\) if \(s\not \in \alpha \), and \( F' (d)(s) = \max \{\sum _{s' \in S} d(s') \cdot \delta (s)(a)(s') \mid a \in \mathrm {Act}, \delta (s)(a) \ne *\}\) if \(s\in \alpha \).

The function \(F'\) in Definition 5.4 is a Bellman operator for MDPs—it takes the average of d over \(\delta (s)(a)\) and takes the maximum over a. Therefore the lfp in Definition 5.4 is the maximum reachability probability to \(S\setminus \alpha \); the problem asks if it is \(\le \lambda \). In other words, it asks whether the safety probability—of staying in \(\alpha \) henceforth, under any choices of actions—is \(\ge 1-\lambda \). This problem is the same as in [6].
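A sketch (ours, with hypothetical representation choices) of the Bellman-style operator \(F'\) of Definition 5.4 on finite MDPs, with quantitative predicates as maps \(S\rightarrow [0,1]\):

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)
import qualified Data.Set as Set

type State = Int
type Act   = Int
type Dist  = Map State Double   -- a probability distribution over states

-- F'(d)(s) = 1 if s ∉ α;  max_a Σ_{s'} d(s') · δ(s)(a)(s') if s ∈ α.
bellman :: Set.Set State                  -- safe states α
        -> Map State (Map Act Dist)       -- δ: for each state, its available actions
        -> Map State Double -> Map State Double
bellman alpha delta d = Map.mapWithKey step delta
  where
    step s acts
      | s `Set.notMember` alpha = 1
      | Map.null acts           = 0       -- assumption: a state with no action stays safe
      | otherwise               = maximum (map expect (Map.elems acts))
    expect dist = sum [ p * Map.findWithDefault 0 s' d | (s', p) <- Map.toList dist ]
```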

Instance of PDR. The IBSP (Definition 5.4) is LFP-OA and thus amenable to LT-PDR. We call this instance \(\boldsymbol{PDR}^{\boldsymbol{IB\text {-}MDP}}\); see [22, Appendix E] for details.

\({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) shares many essences with PrIC3 [6]. It uses the operator \(F'\) in Definition 5.4, which coincides with the one in [6, Def. 2]. PrIC3 maintains frames; they coincide with KT sequences in \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\).

Our Kleene sequences correspond to obligations in PrIC3, modulo the following difference. Kleene sequences aim at a negative witness (Sect. 3.2), but they happen to help the positive proof efforts too (Sect. 3.3); obligations in PrIC3 are solely for accelerating the positive proof efforts. Thus, if PrIC3's positive proof efforts fail, one additionally needs to check whether the obligations yield a negative witness.

Structural Derivation. One can derive the IBSP (Definition 5.4) from the categorical theory in Sect. 4.2. Specifically, we first formulate the BSP \( \lnot d_{\lambda } \;\le ^?\; \nu x.\, d_\alpha \wedge \delta ^*\dot{G}x \), where \(\dot{G}\) is a suitable lifting (of G for MDPs, Table 1) that combines average and minimum, \(\lnot :[0,1]^{S}\rightarrow [0,1]^{S}\) is defined by \((\lnot d)(s)\mathbin {:=}1-d(s)\), and \(d_{\alpha }\) is such that \(d_{\alpha }(s)=1\) if \(s\in \alpha \) and \(d_{\alpha }(s)=0\) otherwise. Using \(\lnot :[0,1]^{S}\rightarrow [0,1]^{S}\) in the above as an involution, we apply Proposition 4.3 and obtain the IBSP (Definition 5.4).

Another benefit of the categorical theory is that it can tell us that a forward instance of LT-PDR (much like \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) in Sect. 5.1) is unlikely to exist for MDPs. Indeed, we showed in Proposition 4.2 that \(\dot{G}\)'s preservation of meets is essential (the existence of a left adjoint is equivalent to meet preservation). We can easily show that our \(\dot{G}\) for MDPs does not preserve meets. See [22, Appendix G].

5.3 LT-PDR for Markov Reward Models: \({\textbf {PDR}}^{{\textbf {MRM}}}\)

We present a PDR-like algorithm for Markov reward models (MRMs), which seems to be new, as an instance of LT-PDR. An MRM consists of a set S of states and a transition function \(\delta \) that maps \(s \in S\) (the current state) and \(c \in \mathbb {N}\) (the reward) to a function \(\delta (s)(c): S \rightarrow [0, 1]\); the latter represents the probability distribution of next states.

We solve the following problem. We use \([0,\infty ]\)-valued predicates—representing accumulated rewards—where \([0,\infty ]\) is the set of extended nonnegative reals.

Definition 5.5

(SP for MRMs). The safety problem (SP) for an MRM \((S,\delta )\), an initial state \(s_{\iota }\in S\), \(\lambda \in [0,\infty ]\), and a set \(\alpha \subseteq S\) of safe states is \( \mu x.\, F'( x) \,\le ^?\, d_{\iota ,\lambda } \). Here \(d_{\iota ,\lambda }:S\rightarrow [0,\infty ]\) maps \(s_{\iota }\) to \(\lambda \) and others to \(\infty \), and \(F':[0,\infty ]^{S}\rightarrow [0,\infty ]^{S}\) is defined by \( F'(d)(s) = 0 \) if \(s\not \in \alpha \), and \( F'(d)(s) = \sum _{s' \in S,c \in \mathbb {N}} (c+d(s')) \cdot \delta (s)(c)(s') \) if \(s\in \alpha \).

The function \(F'\) accumulates expected reward in \(\alpha \). Thus the problem asks if the expected accumulated reward, starting from \(s_{\iota }\) and until leaving \(\alpha \), is \(\le \lambda \).
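Analogously, a sketch (ours) of the expected-reward operator \(F'\) of Definition 5.5 on finite MRMs, with predicates as maps \(S\rightarrow [0,\infty )\):

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)
import qualified Data.Set as Set

type State  = Int
type Reward = Int

-- F'(d)(s) = 0 if s ∉ α;  Σ_{s', c} (c + d(s')) · δ(s)(c)(s') if s ∈ α.
mrmStep :: Set.Set State                           -- safe states α
        -> Map State (Map (State, Reward) Double)  -- δ(s)(c)(s'), flattened to (s', c) pairs
        -> Map State Double -> Map State Double
mrmStep alpha delta d = Map.mapWithKey step delta
  where
    step s dist
      | s `Set.notMember` alpha = 0
      | otherwise =
          sum [ (fromIntegral c + Map.findWithDefault 0 s' d) * p
              | ((s', c), p) <- Map.toList dist ]
```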

Instance of PDR. The SP (Definition 5.5) is LFP-OA and thus amenable to LT-PDR. We call this instance \(\boldsymbol{PDR}^{\boldsymbol{MRM}}\). It seems to be new. See [22, Appendix F] for details.

Structural Derivation. The function \(F'\) in Definition 5.5 can be expressed categorically as \(F'(x)=d_\alpha \wedge \delta ^*\dot{G}(x)\), where \(d_{\alpha }:S\rightarrow [0,\infty ]\) carries \(s\in \alpha \) to \(\infty \) and \(s\not \in \alpha \) to 0, and \(\dot{G}\) is a suitable lifting that accumulates expected rewards. However, the SP (Definition 5.5) is not an instance of the three general safety problems in Sect. 4.2. Consequently, we expect that instances of LT-PDR other than \({\textbf {PDR}}^{{\textbf {MRM}}}\) (such as \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) and \({\textbf {PDR}}^{{\textbf {IB-Kr}}}\) in Sect. 5.1) are hard to obtain for MRMs.

6 Implementation and Evaluation

Implementation. We implemented LT-PDR in Haskell. Exploiting Haskell’s language features, it is succinct (\(\sim \)50 lines) and almost a literal translation of Algorithm 3 to Haskell. Its main part is presented in [22, Appendix K]. In particular, using suitable type classes, the code is as abstract and generic as Algorithm 3.

Specifically, our implementation is a single Haskell module. It has two interfaces, namely a type class (the lattice of predicates) and a type (the definitions of Candidate, Decide, and Conflict). The main function for LT-PDR takes a monotone function F over the lattice as its second argument and the safety predicate \(\alpha \) as its last.
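To illustrate, such an interface could look as follows; this is a hypothetical sketch with our own names, not the identifiers of the actual module.

```haskell
-- A hypothetical sketch of the two interfaces: a lattice class and a record of
-- heuristics for Candidate, Decide, and Conflict, plus a plausible solver signature.
class CLat a where
  bot, top   :: a
  meet, join :: a -> a -> a
  leq        :: a -> a -> Bool

data Heuristics a = Heuristics
  { candidate :: [a] -> a          -- choose x with x ≰ α (and x ≤ X_{n-1})
  , decide    :: [a] -> [a] -> a   -- choose x with C_i ≤ F x and x ≤ X_{i-1}
  , conflict  :: [a] -> [a] -> a   -- choose x with C_i ≰ x and F(X_{i-1} ∧ x) ≤ x
  }

-- The generic solver: heuristics, a monotone F, and the safety predicate α.
ltPDR :: CLat a => Heuristics a -> (a -> a) -> a -> Bool
ltPDR = undefined  -- stub only; the actual loop alternates the rules of Algorithm 3
```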

Obtaining concrete instances is easy: one only fixes these two interfaces. A simple implementation of \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) takes 15 lines; a more serious SAT-based one for \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) takes \(\sim \)130 lines; \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) and \({\textbf {PDR}}^{{\textbf {MRM}}}\) take \(\sim \)80 lines each.

Heuristics. We briefly discuss the heuristics used in our experiments, i.e. how to choose \(x \in L\) in Candidate, Decide, and Conflict. The heuristics of \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) are based on the conventional PDR [9]. The heuristics of \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) are based on the idea of representing the smallest possible x greater than some real number \(v \in [0, 1]\) (e.g. the x taken in Candidate) as \(x=v+\epsilon \), where \(\epsilon \) is a symbolic variable. This implies that Unfold (or Valid, Model) is always applied within finitely many steps, which further guarantees finite-step termination for invalid cases and \(\omega \)-step termination for valid cases (see [22, Appendix H] for more detail). The heuristics of \({\textbf {PDR}}^{{\textbf {MRM}}}\) are similar to those of \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\).

Experiment Setting. We experimentally assessed the performance of these instances of LT-PDR. The settings are as follows: 1.2 GHz Quad-Core Intel Core i7 with 10 GB memory using Docker for \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\), and Apple M1 Chip with 16 GB memory for the others. The settings differ because we needed Docker to run PrIC3 [6].

Experiments with \({\textbf {PDR}}^{{\textbf {MRM}}}\). Table 2a shows the results. We observe that \({\textbf {PDR}}^{{\textbf {MRM}}}\) answered correctly and that the execution time is reasonable. Further performance analysis (e.g. comparison with [19]) and improvement are future work; the point here, nevertheless, is that we obtained a reasonable MRM model checker by adding \(\sim \)80 lines to the generic solver.

Experiments with \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\). Table 2c shows the results. Both PrIC3 and our \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) solve a linear programming (LP) problem in Decide. PrIC3 uses Z3 for this; \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) uses GLPK. PrIC3 represents an MDP symbolically, while \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) does so concretely. Symbolic representation in \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) is possible—it is future work. PrIC3 can use four different interpolation generalization methods, leading to different performance (Table 2c).

We observe that \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) outperforms PrIC3 for some benchmarks with smaller state spaces. We believe that the failure of \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\) on many instances can be attributed to our current choice of a generalization method (it is the closest to the linear one of PrIC3). Table 2c suggests that using the polynomial or hybrid methods could enhance the performance.

Experiments with \({\textbf {PDR}}^{{\textbf {F-Kr}}}\). Table 2b shows the results. The benchmarks are mostly from the HWMCC’15 competition [1], except for latch0.smv and counter.smv (our own).

IC3ref vastly outperforms \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) in many instances. This is hardly a surprise—IC3ref was developed towards superior performance, while \({\textbf {PDR}}^{{\textbf {F-Kr}}}\)’s emphasis is on its theoretical simplicity and genericity. We nevertheless see that \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) solves some benchmarks of substantial size, such as power2bit8.smv. This demonstrates the practical potential of LT-PDR, especially in view of the following improvement opportunities (we will pursue them as future work): 1) use of well-developed SAT solvers (we currently use toysolver for its good interface but we could use Z3); 2) allowing \(|C_i| > 1\), a technique discussed in Sect. 5.1 and implemented in IC3ref but not in \({\textbf {PDR}}^{{\textbf {F-Kr}}}\); and 3) other small improvements, e.g. in our CNF-based handling of propositional formulas.

Ablation Study. To assess the value of the key concept of PDR (namely the positive-negative interplay between the Knaster–Tarski and Kleene theorems, Sect. 3.3), we compared \({\textbf {PDR}}^{{\textbf {F-Kr}}}\) with the instances of positive and negative LT-PDR (Sects. 3.1–3.2) for Kripke structures.

Table 2d shows the results. Note that the value of the positive-negative interplay is already theoretically established; see e.g. Proposition 3.17 (the interplay detects executions that lead to nowhere). This value was also experimentally witnessed: see power2bit8.smv and simpleTrans.smv, where the one-sided methods made wrong choices and timed out. One-sided methods can be efficient when they get lucky (e.g. in counter.smv). LT-PDR may be slower because of the overhead of running two sides, but that is a trade-off for the increased chance of termination.

Discussion. We observe that all of the studied instances exhibited at least reasonable performance. We note again that detailed performance analysis and improvement are out of our current scope. Being able to derive these model checkers, with such a small effort as \(\sim \)100 lines of Haskell code each, demonstrates the value of our abstract theory and its generic Haskell implementation.

Table 2. Experimental results for our \({\textbf {PDR}}^{{\textbf {F-Kr}}}\), \({\textbf {PDR}}^{{\textbf {IB-MDP}}}\), and \({\textbf {PDR}}^{{\textbf {MRM}}}\)

7 Conclusions and Future Work

We have presented a lattice-theoretic generalization of the PDR algorithm called LT-PDR. It decomposes the PDR algorithm into positive and negative parts, which are tightly connected to the Knaster–Tarski and Kleene fixed point theorems, respectively. We then combined this with a coalgebraic and fibrational theory for modeling transition systems with predicates. We instantiated the framework for several types of transition systems, deriving existing PDR algorithms as well as a new one over Markov reward models. Instantiating LT-PDR and the categorical safety problems to derive other PDR-like algorithms, such as PDR for hybrid systems [29], is left as future work.

We will also work on combining our work with the theory of abstract interpretation [10, 12]. Our current framework axiomatizes what is required of heuristics, but it does not tell us how to realize such heuristics (which differ a lot between concrete settings). We expect abstract interpretation to provide general recipes for realizing such heuristics.