1 Introduction

Probabilistic programs [26, 43, 48] augment deterministic programs with stochastic behaviors, e.g., random sampling, probabilistic choice, and conditioning (via posterior observations). Probabilistic programs have undergone a recent surge of interest due to prominent applications in a wide range of domains: they steer autonomous robots and self-driving cars [20, 54], are key to describing security [6] and quantum [61] mechanisms, intrinsically code up randomized algorithms for solving NP-hard or even deterministically unsolvable problems (in, e.g., distributed computing [2, 53]), and are rapidly encroaching on AI as well as approximate computing [13]. See [5] for recent advancements in probabilistic programming.

The crux of probabilistic programming, à la Hicks’ interpretation [30], is to treat normal-looking programs as if they were probability distributions. A random-number generator, for instance, is a probabilistic program that produces a uniform distribution over numbers in a range of interest. Such a lift from deterministic program states to possibly infinite-support distributions (over states) renders the verification problem of probabilistic programs notoriously hard [39]. In particular, reasoning about probabilistic loops often amounts to computing quantitative fixed-points, which is highly intractable in practice. As a consequence, existing techniques are mostly concerned with approximations, i.e., they strive for verifying or obtaining upper and/or lower bounds on various quantities like assertion-violation probabilities [59], preexpectations [9, 28], moments [58], expected runtimes [40], and concentrations [15, 16], which reveal only partial information about the probability distribution carried by the program.

In this paper, we address the problem of how to determine whether a (possibly infinite-state) probabilistic program yields exactly the desired (possibly infinite-support) distribution under all possible inputs. We highlight two scenarios where encoding the exact distribution – rather than (bounds on) the above-mentioned quantities – is of particular interest: (I) In many safety- and/or security-critical domains, e.g., cryptography, a slightly perturbed distribution (while many of its probabilistic quantities remain unchanged) may lead to significant attack vulnerabilities or even complete compromise of the cryptographic system, see, e.g., Bleichenbacher’s biased-nonces attack [29, Sect. 5.10] against the probabilistic Digital Signature Algorithm. Therefore, the system designer has to impose a complete specification of the anticipated distribution produced by the probabilistic component. (II) In the context of quantitative verification, the user may be interested in multiple properties (of different types, e.g., the aforementioned quantities) of the output distribution carried by a probabilistic program. In the absence of the exact distribution, multiple analysis techniques – tailored to different types of properties – have to be applied in order to answer all queries from the user. We further motivate the problem with a concrete example.

Example 1

(Photorealistic Rendering [37]). Monte Carlo integration algorithms form a well-known class of probabilistic programs which approximate complex integral expressions by sampling [27]. One particular use case is the photorealistic rendering of virtual scenes by a technique called Monte Carlo path tracing (MCPT) [37].

MCPT works as follows: For every pixel of the output image, it shoots n sample rays into the scene and models the light transport behavior to approximate the incoming light at that particular point. Starting from a certain pixel position, MCPT randomly chooses a direction, traces it until a scene object is hit, and then proceeds by either (i) terminating the tracing and evaluating the overall ray, or (ii) continuing the tracing by computing a new direction. In the physical world, the light ray may be reflected arbitrarily often, and thus stopping the tracing after a fixed number of bounces would introduce a bias in the integral estimation. As a remedy, the decision when to stop the tracing is made in a Russian roulette manner by flipping a coin at each intersection point [1].

The program in Fig. 1 is an implementation of a simplified MCPT path generator. The cumulative length of all \(\mathtt {n}\) rays is stored in the (random) variable \(\mathtt {c}\), which is directly proportional to MCPT’s expected runtime. The implementation is designed in a way that \(\mathtt {c}\) induces a distribution as the sum of \(\mathtt {n}\) independent and identically distributed (i.i.d.) geometric random variables such that the resulting integral estimation is unbiased. In our framework, we view such an exact output distribution of \(\mathtt {c}\) as a specification and verify – fully automatically – that the implementation in Fig. 1 with nested loops indeed satisfies this specification.    \(\lhd \)

Fig. 1. Monte Carlo path tracing in a scene with constant reflectivity \({1/2}\).

Approach. Given a probabilistic loop \(L = {\texttt {while}}\,(\varphi )\,\{P\}\) with guard \(\varphi \) and loop-free body P, we aim to determine whether L agrees with a specification S:

$$\begin{aligned} L ~{}\sim {}~ S \qquad \qquad (\star ) \end{aligned}$$

namely, whether L yields – upon termination – exactly the same distribution as encoded by S under all possible program inputs. This problem is non-trivial: (C1) L may induce an infinite state space and infinite-support distributions, thus making techniques like probabilistic bounded model checking [34] insufficient for verifying the property by means of unfolding the loop L. (C2) There is, to the best of our knowledge, a lack of non-trivial characterizations of L and S such that problem (\(\star \)) admits a decidability result. (C3) To decide problem (\(\star \)) – even for a loop-free program L – one has to account for infinitely or even uncountably many inputs, i.e., certify that L yields the distribution encoded by S when deployed in every possible context.

We address challenge (C1) by exploiting the forward denotational semantics of probabilistic programs based on probability generating function (PGF) representations of (sub-)distributions [42], which benefits crucially from closed-form (i.e., finite) PGF representations of possibly infinite-support distributions. A probabilistic program L hence acts as a transformer \(\llbracket L \rrbracket (\cdot )\) that transforms an input PGF g into an output PGF \(\llbracket L \rrbracket (g)\) (as an instantiation of Kozen’s transformer semantics [43]). In particular, we interpret the specification S as a loop-free probabilistic program I. Such an identification of specifications with programs has two important advantages: (i) we only need a single language to encode programs as well as specifications, and (ii) it enables compositional reasoning in a straightforward manner, in particular, the treatment of nested loops. The problem of checking \(L \sim S\) then boils down to checking whether L and I transform every possible input PGF into the same output PGF:

$$\begin{aligned} \forall g \in {\textsf {PGF}} :\quad \llbracket L \rrbracket (g) ~{}={}~\llbracket I \rrbracket (g) \qquad \qquad (\dagger ) \end{aligned}$$

As I is loop free, problem (\(\dagger \)) can be reduced to checking the equivalence of two loop-free probabilistic programs (cf. Lemma 2):

$$\begin{aligned} \forall g \in {\textsf {PGF}} :\quad \llbracket {\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \} \rrbracket (g) ~{}={}~\llbracket I \rrbracket (g) \qquad \qquad (\ddagger ) \end{aligned}$$

Now challenge (C3) applies since the universal quantification in problem (\(\ddagger \)) requires determining equivalence against infinitely many – possibly infinite-support – distributions over program states. We facilitate such an equivalence check by developing a second-order PGF (SOP) semantics for probabilistic programs, which naturally extends the PGF semantics while allowing us to reason about infinitely many PGF transformations simultaneously (see Lemma 3).

Finally, to obtain a decidability result (cf. challenge (C2)), we develop the rectangular discrete probabilistic programming language (\({\texttt {ReDiP}} \)) – a variant of \({\texttt {pGCL}} \) [46] with syntactic restrictions to rectangular guards – featuring several desirable properties: e.g., \({\texttt {ReDiP}} \) programs inherently support i.i.d. sampling and, in particular, preserve closed-form PGF when acting as PGF transformers. We show that problem (\(\ddagger \)) is decidable for \({\texttt {ReDiP}} \) programs P and I if all the distribution statements therein have rational closed-form PGF (cf. Lemma 4). As a consequence, problem (\(\dagger \)) and thereby problem (\(\star \)) of checking \(L \sim S\) are decidable if L terminates almost-surely on all possible inputs g (cf. Theorem 4).

Demonstration. We have automated our techniques in a tool called \(\textsc {Prodigy}\). As an example, \(\textsc {Prodigy}\) was able to verify, fully automatically in 25 milliseconds, that the implementation of the MCPT path generator with nested loops (in Fig. 1) is indeed equivalent to the loop-free program

$$\begin{aligned} {\mathtt {c}} ~{+}{=}~ {\mathtt {iid}({\mathtt {geometric}({1/2})}, \, {\mathtt {n}})} \mathbin {;}~~ {\mathtt {n}} \,:=\, {0} \end{aligned}$$

which encodes the specification that, upon termination, \(\mathtt {c}\) is distributed as the sum of \(\mathtt {n}\) i.i.d. geometric random variables. With such an output distribution, multiple queries can be efficiently answered by applying standard PGF operations. For example, the expected value and variance of the runtime are \( E [\mathtt {c}] = n\) and \( Var [\mathtt {c}] = 2n\), respectively (assuming \(\mathtt {c}=0\) initially).
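
To make this concrete, here is a minimal Python sketch (illustrative only, using SymPy rather than Prodigy's actual API) of how such moment queries reduce to differentiating the closed-form PGF:

```python
# Illustrative sketch (not Prodigy's API): moment queries against the
# closed-form PGF G(X) = (1/(2-X))**n of the sum of n i.i.d.
# geometric(1/2) samples, assuming c = 0 initially.
import sympy as sp

X = sp.Symbol('X')
n = sp.Symbol('n', positive=True)
G = (1 / (2 - X))**n

mean = sp.simplify(sp.diff(G, X).subs(X, 1))      # E[c]      = G'(1)
fact2 = sp.simplify(sp.diff(G, X, 2).subs(X, 1))  # E[c(c-1)] = G''(1)
var = sp.simplify(fact2 + mean - mean**2)         # Var[c]

print(mean, var)  # prints: n 2*n
```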

Contributions. The main contributions of this paper are:

  • The probabilistic programming language \({\texttt {ReDiP}} \) and its forward denotational semantics as PGF transformers. We show that loop-free \({\texttt {ReDiP}} \) programs preserve closed-form PGF.

  • The notion of SOP that enables reasoning about infinitely many PGF transformations simultaneously. We show that the problem of determining whether an infinite-state \({\texttt {ReDiP}} \) loop generates – upon termination – exactly a specified distribution is decidable.

  • The software tool \(\textsc {Prodigy}\) which supports automatic invariance checking on the source-code level; it allows reasoning about nested \({\texttt {ReDiP}} \) loops in a compositional manner, and supports efficient queries on various quantities including assertion-violation probabilities, expected values, (high-order) moments, precise tail probabilities, as well as concentration bounds.

Organization. We introduce generating functions in Sect. 2 and define the \({\texttt {ReDiP}} \) language in Sect. 3. Section 4 presents the PGF semantics. Section 5 establishes our decidability result in reasoning about \({\texttt {ReDiP}} \) loops, with case studies in Sect. 6. After discussing related work in Sect. 7, we conclude the paper in Sect. 8. Further details, e.g., proofs and additional examples, can be found in the full version [18].

2 Generating Functions

“A generating function is a clothesline on which we hang up a sequence of numbers for display.” — H. S. Wilf, Generatingfunctionology [60]

The method of generating functions (GF) is a vital tool in many areas of mathematics. This includes in particular enumerative combinatorics [22, 60] and – most relevant for this paper – probability theory [35]. In the latter, the sequences “hanging on the clotheslines” happen to describe probability distributions over the non-negative integers \(\mathbb {N} \), e.g., \({1/2, 1/4, 1/8, \ldots }\) (aka, the geometric distribution).

The most common way to relate an (infinite) sequence of numbers to a generating function relies on the familiar Taylor series expansion: Given a sequence, for example \({1/2, 1/4, 1/8, \ldots }\), find a function \(x \mapsto f(x)\) whose Taylor series around \(x=0\) uses the numbers in the sequence as coefficients. In our example,

$$\begin{aligned} \frac{1}{2 - x} ~{}={}~\frac{1}{2} + \frac{1}{4} x + \frac{1}{8} x^2 + \frac{1}{16} x^3 + \frac{1}{32} x^4 + \ldots , \end{aligned}$$
(1)

for all \(|x| < 2\), hence the “clothesline” used for hanging up \({1/2, 1/4, 1/8, \ldots }\) is the function \(1/(2-x)\). Note that the GF is a – from a purely syntactical point of view – finite object while the sequence it represents is infinite. A key strength of this technique is that many meaningful operations on infinite series can be performed by manipulating an encoding GF (see Table 1 for an overview and examples). In other words, GF provide an interface to perform operations on and extract information from infinite sequences in an effective manner.
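
For instance, the expansion in (1) can be reproduced mechanically; a minimal SymPy sketch (our own illustration):

```python
# Sketch: recovering the sequence 1/2, 1/4, 1/8, ... from the
# "clothesline" 1/(2-x) by Taylor expansion around x = 0.
import sympy as sp

x = sp.Symbol('x')
print(sp.series(1 / (2 - x), x, 0, 5))
# 1/2 + x/4 + x**2/8 + x**3/16 + x**4/32 + O(x**5)
```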

2.1 The Ring of Formal Power Series

Towards our goal of encoding distributions over program states (valuations of finitely many integer variables) as generating functions, we need to consider multivariate GF, i.e., GF with more than one variable. Such functions represent multidimensional sequences, or arrays. Since multidimensional Taylor series quickly become unhandy, we will follow a more algebraic approach that is also advocated in [60]: We treat sequences and arrays as elements from an algebraic structure: the ring of Formal Power Series (FPS). Recall that a (commutative) ring \((A,+,\cdot ,0,1)\) consists of a non-empty carrier set A, associative and commutative binary operations “\(+\)” (addition) and “\(\cdot \)” (multiplication) such that multiplication distributes over addition, and neutral elements 0 and 1 w.r.t. addition and multiplication, respectively. Further, every \(a \in A\) has an additive inverse \(-a \in A\). Multiplicative inverses \(a^{-1} = 1/a\) need not always exist. Let \( k \in \mathbb {N} = \{0,1,\ldots \}\) be fixed in the remainder.

Table 1. GF cheat sheet. f, g and X, Y are arbitrary GF and indeterminates, resp.

Definition 1

(The Ring of FPS). A k-dimensional FPS is a k-dim. array \(f :\mathbb {N} ^k \rightarrow \mathbb {R} \). We denote FPS as formal sums as follows: Let \(\mathbf {X} {=} (X_1,\ldots , X_k)\) be an ordered vector of symbols, called indeterminates. The FPS f is written as

$$ f ~{}={}~\sum \nolimits _{\sigma \in \mathbb {N} ^k} f(\sigma ) \mathbf {X}^\sigma $$

where \(\mathbf {X}^\sigma \) is the monomial \(X_1^{\sigma _1} X_2^{\sigma _2} \cdots X_k^{\sigma _k}\). The ring of FPS is denoted \(\mathbb {R} [[{\mathbf {X}}]]\) where the operations are defined as follows: For all \(f,g \in \mathbb {R} [[{\mathbf {X}}]]\) and \(\sigma \in \mathbb {N} ^k\), \((f + g)(\sigma ) = f(\sigma ) + g(\sigma )\), and \((f \cdot g)(\sigma ) = \sum _{\sigma _1+\sigma _2=\sigma }f(\sigma _1)g(\sigma _2)\).

The multiplication \(f \cdot g\) is the usual Cauchy product of power series (aka discrete convolution); it is well defined because for all \(\sigma \in \mathbb {N} ^k\) there are just finitely many \(\sigma _1 + \sigma _2 = \sigma \) in \(\mathbb {N} ^k\). We write fg instead of \(f \cdot g\).

The formal sum notation is standard in the literature and often useful because the arithmetic FPS operations are very similar to how one would do calculations with “real” sums. We stress that the indeterminates \(\mathbf {X}\) are merely labels for the k dimensions of f and do not have any other particular meaning. In the context of this paper, however, it is natural to identify the indeterminates with the program variables (e.g. indeterminate X refers to variable \(\mathtt {x}\), see Sect. 3).

Equation (1) can be interpreted as follows in the ring of FPS: The “sequences” \(2 - 1X + 0X^2 + \ldots \) and \({1/2 + 1/4X + 1/8X^2 + \ldots }\) are (multiplicative) inverse elements to each other in \(\mathbb {R} [[{X}]]\), i.e., their product is 1. More generally, we say that an FPS f is rational if \(f = gh^{-1} = g/h\) where g and h are polynomials, i.e., they have at most finitely many non-zero coefficients; and we call such a representation a rational closed form.
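
This inverse-element reading is easy to test on truncated coefficient arrays; the following sketch (our own, with an arbitrary truncation bound) implements the Cauchy product of Definition 1 and multiplies the two "sequences" above:

```python
# Sketch: truncated 1-dim. FPS arithmetic; fps[i] is the coefficient
# of X**i. The Cauchy product of 2 - X and 1/2, 1/4, 1/8, ... is 1.
from fractions import Fraction

N = 8  # truncation bound (arbitrary)

def fps_mul(f, g):  # Cauchy product (discrete convolution)
    return [sum(f[j] * g[i - j] for j in range(i + 1)) for i in range(N)]

two_minus_x = [2, -1] + [0] * (N - 2)
geom = [Fraction(1, 2**(i + 1)) for i in range(N)]

print(fps_mul(two_minus_x, geom))  # coefficients 1, 0, 0, ...: the FPS 1
```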

A more extensive introduction to FPS can be found in [18, Appx. D].

2.2 Probability Generating Functions

We are especially interested in GF that describe probability distributions.

Definition 2

(PGF). A k-dimensional FPS g is a probability generating function (PGF) if (i) for all \(\sigma \in \mathbb {N} ^k\) we have \(g(\sigma ) \ge 0\), and (ii) \(\sum _{\sigma \in \mathbb {N} ^k} g(\sigma ) \le 1\).

For example, (1) is the PGF of a \({1/2}\)-geometric distribution. The PGF of other standard distributions are given in Table 3 further below. Note that Definition 2 also includes sub-PGF where the sum in (ii) is strictly less than 1.
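
As a sanity check (again an illustrative sketch), condition (ii) can be tested for a closed-form GF by evaluating it at 1:

```python
# Sketch: the total probability mass of a closed-form GF is its value at 1.
import sympy as sp

x = sp.Symbol('x')
g = 1 / (2 - x)   # geometric(1/2): mass g(1) = 1, a proper PGF
h = 1 / (3 - x)   # coefficients 1/3, 1/9, ...: mass h(1) = 1/2, a sub-PGF
print(g.subs(x, 1), h.subs(x, 1))  # 1 1/2
```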

3 \({\texttt {ReDiP}} \): A Probabilistic Programming Language

This section presents our Rectangular Discrete Probabilistic Programming Language, or \({\texttt {ReDiP}} \) for short. The word “rectangular” refers to a restriction we impose on the guards of conditionals and loops, see Sect. 3.2. \({\texttt {ReDiP}} \) is a variant of \({\texttt {pGCL}} \) [46] with some extra syntax but also some syntactic restrictions.

3.1 Program States and Variables

Every \({\texttt {ReDiP}} \)-program P operates on a finite set of \(\mathbb {N} \)-valued program variables \( Vars (P) = \{\mathtt {x}_1,\ldots ,\mathtt {x}_k\}\). We do not consider negative or non-integer variables. A program state of P is thus a mapping \(\sigma : Vars (P) \rightarrow \mathbb {N} \). As explained in Sect. 1, the key idea is to represent distributions over such program states as PGF. Consequently, we identify a single program state \(\sigma \) with the monomial \(\mathbf {X}^{\sigma } = X_{1}^{\sigma (\texttt {x}_{1})} \cdots X_{k}^{\sigma (\texttt {x}_{k})}\) where \(X_1,\ldots ,X_k\) are indeterminates representing the program variables \(\mathtt {x}_1,\ldots ,\mathtt {x}_k\). We will stick to this notation: throughout the whole paper, we typeset program variables as \(\mathtt {x}\) and the corresponding FPS indeterminate as X. The initial program state on which a given \({\texttt {ReDiP}} \)-program is supposed to operate must always be stated explicitly.

3.2 Syntax of \({\texttt {ReDiP}} \)

The syntax of \({\texttt {ReDiP}} \) is defined inductively, see the leftmost column of Table 2. Here, \(\mathtt {x}\) and \(\mathtt {y}\) are program variables, \(n \in \mathbb {N} \) is a constant, D is a distribution expression (see Table 3), and \(P_1, P_2\) are \({\texttt {ReDiP}} \)-programs. The general idea of \({\texttt {ReDiP}} \) is to provide a minimal core language to keep the theory simple. Many other common language constructs such as linear arithmetic updates \({\mathtt {x}} \,:=\, {2\mathtt {y} + 3}\) are expressible in this core language. See [18, Appx. A] for a complete specification.

Table 2. Syntax and semantics of \({\texttt {ReDiP}} \). g is the input PGF.
Table 3. A non-exhaustive list of common discrete distributions with rational PGF. The parameters p, n, and \(\lambda \) are a probability, a natural, and a non-negative real number, respectively. T is a reserved placeholder indeterminate.

The word “rectangular” in \({\texttt {ReDiP}} \) emphasizes that our \(\mathtt {if}\)-guards can only identify axis-aligned hyper-rectangles in \(\mathbb {N} ^k\), but no more general polyhedra. These rectangular guards \(\mathtt {x} < n\) have the fundamental property that they preserve rational PGF. On the other hand, allowing more general guards like \(\mathtt {x} < \mathtt {y}\) breaks this property (see [21] and our comments in [18, Appx. B]).

The most intricate feature of \({\texttt {ReDiP}} \) is the – potentially unbounded – loop \({\texttt {while}}\,(\mathtt {x} < n)\,\{P\}\). A program that does not contain loops is called loop-free.

3.3 The Statement \({\mathtt {x}} ~ {+}{=} ~ {\mathtt {iid}({D}, \, {\mathtt {y}})}\)

The novel \(\mathtt {iid}\) statement is the heart of the loop-free fragment of \({\texttt {ReDiP}} \) – it subsumes both \({\mathtt {x}} \,:=\, {D}\) (“assign a D-distributed sample to \(\mathtt {x}\)”) and the standard assignment \({\mathtt {x}} \,:=\, {\mathtt {y}}\). We include the assign-increment (\({+}{=}\)) version of \(\mathtt {iid}\) in the core fragment of \({\texttt {ReDiP}} \) for technical reasons; the assignment \({\mathtt {x}} \,:=\, {\mathtt {iid}({D}, \, {\mathtt {y}})}\) can be recovered from that as syntactic sugar by simply setting \({\mathtt {x}} \,:=\, {0}\) beforehand.

Intuitively, the meaning of \({\mathtt {x}} ~ {+}{=} ~ {\mathtt {iid}({D}, \, {\mathtt {y}})}\) is as follows. The right-hand side \(\mathtt {iid}({D}, \, {\mathtt {y}})\) can be seen as a function that takes the current value v of variable \(\mathtt {y}\), then draws v i.i.d. samples from distribution D, computes the sum of all these samples and finally increments \(\mathtt {x}\) by the so-obtained value. For example, to perform \({\mathtt {x}} \,:=\, {\mathtt {y}}\), we may just write \( {\mathtt {x}} \,:=\, {\mathtt {iid}({\mathtt {dirac}(1)}, \, {\mathtt {y}})} \) as this will draw \(\mathtt {y}\) times the number 1, then sum up these \(\mathtt {y}\) many 1’s to obtain the result \(\mathtt {y}\) and assign it to \(\mathtt {x}\). Similarly, to assign a random sample from a, say, uniform distribution D to \(\mathtt {x}\), we can set \({\mathtt {y}} \,:=\, {1}\) and then execute \({\mathtt {x}} \,:=\, {\mathtt {iid}({D}, \, {\mathtt {y}})}\), as the sum of a single sample is just that sample.

But \(\mathtt {iid}\) is not only useful for defining standard operations. In fact, taking sums of i.i.d. samples is common in probability theory. The binomial distribution with parameters \(p \in (0,1)\) and \(n \in \mathbb {N} \), for example, is defined as the sum of n i.i.d. Bernoulli-p-distributed samples and thus

$$ {\mathtt {x}} \,:=\, {\mathtt {binomial}({p},\,{\mathtt {y}})} \qquad \text {is equivalent to} \qquad {\mathtt {x}} \,:=\, {\mathtt {iid}({\mathtt {bernoulli}({p})}, \, {\mathtt {y}})} $$

for all constants \(p \in (0,1)\). Similarly, the negative (p, n)-binomial distribution is the sum of n i.i.d. geometric-p-distributed samples. Overall, \(\mathtt {iid}\) renders the loop-free fragment of \({\texttt {ReDiP}} \) strictly more expressive than it would be if we had included only \({\mathtt {x}} \,:=\, {D}\) and \({\mathtt {x}} \,:=\, {\mathtt {y}}\) instead. As a consequence, since we use loop-free programs as a specification language (see Sect. 5), \(\mathtt {iid}\) enables us to write more expressive program specifications while retaining decidability.
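
These equivalences can be double-checked on the PGF level, where summing i.i.d. samples corresponds to taking powers of the sample PGF; a small SymPy sketch (our own, for a concrete n):

```python
# Sketch: the PGF of a sum of n i.i.d. bernoulli(p) samples,
# ((1-p) + p*X)**n, coincides with the binomial(p, n) PGF.
import sympy as sp

X, p = sp.symbols('X p')
n = 6
iid_sum = ((1 - p) + p * X)**n
binom = sum(sp.binomial(n, k) * p**k * (1 - p)**(n - k) * X**k
            for k in range(n + 1))
print(sp.expand(iid_sum - binom))  # 0
```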

4 Interpreting \({\texttt {ReDiP}} \) with PGF

In this section, we explain the PGF-based semantics of our language which is given in the second column of Table 2. The overall idea is to view a \({\texttt {ReDiP}} \)-program P as a distribution transformer [44, 46]. This means that the input to P is a distribution over initial program states (inputting a deterministic state is just the special case of a Dirac distribution), and the output is a distribution over final program states. With this interpretation, if one regards distributions as generalized program states [33], a probabilistic program is actually deterministic: The same input distribution always yields the same output distribution. The goal of our PGF-based semantics is to construct an interpreter that executes a \({\texttt {ReDiP}} \)-program statement-by-statement in forward direction, transforming one generalized program state into the next. We stress that these generalized program states, or distributions, can be infinite-support in general. For example, the program \({\mathtt {x}} \,:=\, {\mathtt {geometric}({0.5})}\) outputs a geometric distribution – which has infinite support – on \(\mathtt {x}\).

4.1 A Domain for Distribution Transformation

We now define a domain, i.e., an ordered structure, where our program’s in- and output distributions live. Following the general idea of this paper, we encode them as PGF. Let \( Vars \) be a fixed finite set of program variables \(\mathtt {x}_1 ,\ldots , \mathtt {x}_k\) and let \(\mathbf {X} = (X_1,\ldots ,X_k)\) be corresponding formal indeterminates. We let \({\textsf {PGF}} = \{g \in \mathbb {R} [[\mathbf {X}]] \mid g \text { is a PGF}\}\) denote the set of all PGF. Recall that this also includes sub-PGF (Definition 2). Further, we equip \({\textsf {PGF}} \) with the pointwise order, i.e., we let \(g \sqsubseteq f\) iff \(g(\sigma ) \le f(\sigma )\) for all \(\sigma \in \mathbb {N} ^k\). It is clear that \(({\textsf {PGF}}, \sqsubseteq )\) is a partial order that is moreover \(\omega \)-complete, i.e., there exists a least element 0 and all ascending chains \(\varGamma = \{ g_0 \sqsubseteq g_1 \sqsubseteq \ldots \}\) in \({\textsf {PGF}} \) have a least upper bound \(\sup \varGamma \in {\textsf {PGF}} \). The maxima in \(({\textsf {PGF}}, \sqsubseteq )\) are precisely the PGF which are not a sub-PGF.

4.2 From Programs to PGF Transformers

Next we explain how distribution transformation works using (P)GF (cf. Table 1). This is in contrast to the PGF semantics from [42] which operates on infinite sums in a non-constructive fashion.

Definition 3

(The PGF Transformer \(\llbracket P \rrbracket \)). Let P be a \({\texttt {ReDiP}} \)-program. The PGF transformer \(\llbracket P \rrbracket :{\textsf {PGF}} \rightarrow {\textsf {PGF}} \) is defined inductively on the structure of P through the second column in Table 2.

We show in Theorem 2 below that \(\llbracket P \rrbracket \) is well-defined. For now, we go over the statements in the language \({\texttt {ReDiP}} \) and explain the semantics.

Sequential Composition. The semantics of \(P_1 \mathbin {;} P_2\) is straightforward and intuitive: First execute \(P_1\) on g and then \(P_2\) on \(\llbracket P_1 \rrbracket (g)\), i.e., \(\llbracket P_1 \mathbin {;} P_2 \rrbracket (g) = \llbracket P_2 \rrbracket (\llbracket P_1 \rrbracket (g))\). The fact that our semantics transformer moves forwards through the program – as program interpreters usually do – is due to this definition.

Conditional Branching. To translate \({\texttt {if}}\,(\mathtt {x} < n)\,\{P_1\} \, {\texttt {else}} \, \{P_2\}\), we follow the standard procedure which partitions the input distribution according to \(\mathtt {x} < n\) and \(\mathtt {x} \ge n\), processes the two parts independently and finally recombines the results [44]. We realize the partitioning using the (formal) Taylor series expansion. This is feasible because we only allow rectangular guards of the form \(\mathtt {x} < n\), where n is a constant. Thus, for a given input PGF g, the filtered PGF \(g_{\mathtt {x} < n}\) is obtained through expanding g in its first n terms. The \({\texttt {else}}\,\)-part is obviously \(g_{\mathtt {x} \ge n} = g - g_{\mathtt {x} < n}\). We then evaluate \(\llbracket P_1 \rrbracket (g_{\mathtt {x} < n}) + \llbracket P_2 \rrbracket (g_{\mathtt {x} \ge n})\) recursively.
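
A minimal sketch of this filtering (our own SymPy rendering, not Prodigy's implementation):

```python
# Sketch: splitting a PGF g into g_{x<n} (first n Taylor terms in X)
# and g_{x>=n} (the rational remainder), here for n = 3.
import sympy as sp

X = sp.Symbol('X')
g = 1 / (2 - X)                                   # geometric(1/2)
n = 3
g_lt = sum(g.diff(X, i).subs(X, 0) / sp.factorial(i) * X**i
           for i in range(n))                     # 1/2 + X/4 + X**2/8
g_ge = sp.cancel(g - g_lt)                        # mass on x >= 3

print(g_lt, g_ge, sp.cancel(g_lt + g_ge - g))     # ..., ..., 0
```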

Assigning a Constant. Technically, our semantics realizes an assignment \({\mathtt {x}} \,:=\, {n}\) in two steps: It first sets \(\mathtt {x}\) to 0 and then increments it by n. The former is achieved by substituting 1 for X, which corresponds to computing the marginal distribution in all variables except X; the latter amounts to multiplying by \(X^n\). For example, for any input PGF g over \(\{\mathtt {x}, \mathtt {y}\}\) with \({g}[{X}/{1}] = 0.5 Y^2 + 0.5 Y^3\), we obtain

$$\begin{aligned} \llbracket {\mathtt {x}} \,:=\, {2} \rrbracket (g) ~{}={}~X^2 \cdot {g}[{X}/{1}] ~{}={}~0.5 X^2 Y^2 + 0.5 X^2 Y^3 ~. \end{aligned}$$

Note that \(0.5 Y^2 + 0.5 Y^3\) is indeed the marginal of the input distribution in Y.

Decrementing a Variable. Since our program variables cannot take negative values, we define \(\mathtt {x}\mathtt {--}\) as \(\max (\mathtt {x} {-} 1, 0)\), i.e., \(\mathtt {x}\) monus (modified minus) 1. Technically, we realize this through \({\texttt {if}}\,(\mathtt {x}<1)\,\{{\texttt {skip}} \} \, {\texttt {else}} \, \{\mathtt {x}\mathtt {--}\}\), i.e., we apply the decrement only to the portion of the input distribution where \(\mathtt {x} \ge 1\). The decrement itself can then be carried out through “multiplication by \(X^{-1}\)”. Note that \(X^{-1}\) is not an element of \(\mathbb {R} [[{X}]]\) because X has no inverse. Instead, the operation \(gX^{-1}\) is an alias for \( shift ^{\leftarrow }(g)\) which shifts g “to the left” in dimension X. To implement the semantics on top of existing computer algebra software, it is very handy to perform the multiplication by \(X^{-1}\) instead. This is justified because for PGF g with \({g}[{X}/{0}] = 0\), \( shift ^{\leftarrow }(g)\) and \(gX^{-1}\) are equal.
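
For instance (an illustrative sketch), applying the decrement to the geometric PGF \(1/(2-X)\) merges the probability mass of \(\mathtt {x} = 0\) and \(\mathtt {x} = 1\):

```python
# Sketch of [[x--]]: keep the x = 0 portion, shift the x >= 1 portion
# left by "multiplying with X**(-1)" on the rational closed form.
import sympy as sp

X = sp.Symbol('X')
g = 1 / (2 - X)                     # geometric(1/2): 1/2, 1/4, 1/8, ...
g0 = g.subs(X, 0)                   # mass at x = 0, here 1/2
res = sp.cancel(g0 + (g - g0) / X)  # = (3 - X)/(2*(2 - X))

print(sp.series(res, X, 0, 4))      # 3/4 + X/8 + X**2/16 + X**3/32 + O(X**4)
```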

The \(\mathtt {iid}\) Statement. The semantics of \({\mathtt {x}} ~ {+}{=} ~ {\mathtt {iid}({D}, \, {\mathtt {y}})}\) relies on the fact that

$$\begin{aligned} T_1 \sim \llbracket D \rrbracket , ~\ldots , ~T_n \sim \llbracket D \rrbracket ~~\text {independent} \qquad \text {implies}\qquad \sum \nolimits _{i=1}^n T_i \sim \llbracket D \rrbracket ^n ~, \end{aligned}$$
(2)

where \(X \sim g\) means that r.v. X is distributed according to PGF g (see, e.g., [55, p. 450]). The \(\mathtt {iid}\) statement generalizes this observation further: If n is not a constant but a random (program) variable \(\mathtt {y}\) with PGF h(Y), then we perform the substitution \({h}[{Y}/{\llbracket D \rrbracket }]\) (i.e., replace Y by \(\llbracket D \rrbracket \) in h) to obtain the PGF of the sum of \(\mathtt {y}\)-many i.i.d. samples from D. We slightly modify this substitution to \({g}[{Y}/{Y{\llbracket D \rrbracket }[{T}/{X}]}]\) in order to (i) not alter \(\mathtt {y}\), and (ii) account for the increment to \(\mathtt {x}\). For example, for the input PGF \(g = Y^5\) (i.e., \(\mathtt {x} = 0\) and \(\mathtt {y} = 5\) with certainty),

$$\begin{aligned} \llbracket {\mathtt {x}} ~ {+}{=} ~ {\mathtt {iid}({\mathtt {bernoulli}({p})}, \, {\mathtt {y}})} \rrbracket (Y^5) ~{}={}~{Y^5}\bigl [{Y}/{Y(1{-}p+pX)}\bigr ] ~{}={}~Y^5 \, (1{-}p+pX)^5 ~, \end{aligned}$$

i.e., \(\mathtt {x}\) becomes \(\mathtt {binomial}({p},\,{5})\)-distributed while \(\mathtt {y}\) keeps its value 5.
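
Mechanically, this substitution is a one-liner in a CAS; an illustrative sketch (our own, not Prodigy's code):

```python
# Sketch of [[x += iid(bernoulli(p), y)]] on g = Y**5: substitute
# Y -> Y * D(X) with D(X) = (1-p) + p*X (the Bernoulli PGF, T := X).
import sympy as sp

X, Y, p = sp.symbols('X Y p')
g = Y**5                            # x = 0 and y = 5 with certainty
res = g.subs(Y, Y * ((1 - p) + p * X))

print(sp.expand(res / Y**5))        # ((1-p) + p*X)**5: binomial(p, 5) on x
```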

The \(\mathtt {while}\)-Loop. The fixed point semantics of the while loop is standard [42, 44] and reflects the intuitive unrolling rule, namely that \({\texttt {while}}\,(\varphi )\,\{P\}\) is equivalent to \({\texttt {if}}\,(\varphi )\,\{P \mathbin {;} {\texttt {while}}\,(\varphi )\,\{P\}\} \, {\texttt {else}} \, \{{\texttt {skip}} \}\). Indeed, the fixed point formula in Table 2 can be derived using the semantics of \(\mathtt {if}\) discussed above. We revisit this fixed point characterization in Sect. 5.1.
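
To illustrate the fixed point view, the following sketch (loop and representation are our own choices) performs Kleene iteration – repeated unrolling starting from the constant-0 transformer – for the loop \({\texttt {while}}\,(\mathtt {x} < 1)\,\{{\mathtt {x}} \,:=\, {\mathtt {bernoulli}({1/2})}\}\) on the input PGF 1; the iterates \(0, X/2, 3X/4, 7X/8, \ldots \) converge to the least fixed point X, the Dirac distribution at \(\mathtt {x} = 1\):

```python
# Sketch: Kleene iteration of the loop functional for
#     while (x < 1) { x := bernoulli(1/2) }
# starting from the bottom transformer (constant 0), on input PGF g = 1.
import sympy as sp

X = sp.Symbol('X')

def body(h):                        # [[x := bernoulli(1/2)]]
    return h.subs(X, 1) * (sp.Rational(1, 2) + X / 2)

def unroll(k, g):                   # k-fold unrolling of the loop
    if k == 0:
        return sp.Integer(0)        # bottom: no terminating runs yet
    g_lt = g.subs(X, 0)             # guard x < 1 holds: mass at x = 0
    g_ge = sp.expand(g - g_lt)      # guard fails: already terminated
    return sp.expand(unroll(k - 1, body(g_lt)) + g_ge)

for k in range(1, 6):
    print(unroll(k, sp.Integer(1))) # 0, X/2, 3*X/4, 7*X/8, 15*X/16
```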

Properties of \(\llbracket P \rrbracket \). Our PGF semantics has the property that all programs – except while loops – are able to operate on the input PGF in (rational) closed form, i.e., they never have to expand the input as an infinite series (which is of course impossible in practice). More formally:

Theorem 1

(Closed-Form Preservation). Let P be a loop-free \({\texttt {ReDiP}} \) program, and let \(g = h/f \in {\textsf {PGF}} \) be in rational closed form. Then we can compute a rational closed form of \(\llbracket P \rrbracket (g) \in {\textsf {PGF}} \) by applying the transformations in Table 2.

The proof is by induction over the structure of P, noticing that all the necessary operations (substitution, differentiation, etc.) preserve rational closed forms, see [18, Appx. D]. Already a slight extension of our syntax, e.g., admitting non-rectangular guards, breaks closed-form preservation, see [18, Appx. B]. Moreover, \(\llbracket P \rrbracket \) has the following healthiness [46] properties:

Theorem 2

(Properties of \(\llbracket P \rrbracket \)). The PGF transformer \(\llbracket P \rrbracket \) is

  • a well-defined function \({\textsf {PGF}} \rightarrow {\textsf {PGF}} \) ,

  • continuous, i.e., \(\llbracket P \rrbracket (\sup \varGamma ) = \sup \llbracket P \rrbracket (\varGamma )\) for all chains \(\varGamma \subseteq {\textsf {PGF}} \) ,

  • linear, i.e., \(\llbracket P \rrbracket (\sum _{\sigma \in \mathbb {N} ^k} g(\sigma ) \mathbf {X}^\sigma ) = \sum _{\sigma \in \mathbb {N} ^k} g(\sigma ) \llbracket P \rrbracket (\mathbf {X}^\sigma )\) for all \(g \in {\textsf {PGF}} \) .

4.3 Probabilistic Termination

Due to the presence of possibly unbounded \({\texttt {while}} \)-loops, a \({\texttt {ReDiP}} \)-program does not necessarily halt, or may do so only with a certain probability. Our semantics naturally captures the termination probability.

Definition 4

(AST). A \({\texttt {ReDiP}} \)-program P is called almost-surely terminating (AST) for PGF g if \({\llbracket P \rrbracket (g)}[{\mathbf {X}}/{\mathbf {1}}] = {g}[{\mathbf {X}}/{\mathbf {1}}]\), i.e., if it does not leak probability mass. P is called universally AST (UAST) if it is AST for all \(g \in {\textsf {PGF}} \).

Note that all loop-free \({\texttt {ReDiP}} \)-programs are UAST. In this paper, (U)AST plays only a minor role. Nonetheless, the proof rule below yields a stronger result (cf. Lemma 2) if the program is UAST. There exist various techniques and tools for proving (U)AST [17, 47, 50].

5 Reasoning About Loops

We now focus on loopy programs \(L = {\texttt {while}}\,(\varphi )\,\{P\}\). Recall from Table 2 that \(\llbracket L \rrbracket :{\textsf {PGF}} \rightarrow {\textsf {PGF}} \) is defined as the least fixed point of a higher order functional

$$ \varPsi _{{\varphi }, {P}} :({\textsf {PGF}} \rightarrow {\textsf {PGF}}) ~{}\rightarrow {}~({\textsf {PGF}} \rightarrow {\textsf {PGF}}) . $$

Following [42], we show that \(\varPsi _{{\varphi }, {P}}\) is sufficiently well-behaved to allow reasoning about loops by fixed point induction.

5.1 Fixed Point Induction

To apply fixed point induction, we need to lift our domain \({\textsf {PGF}} \) from Sect. 4.1 by one order to \(({\textsf {PGF}} \rightarrow {\textsf {PGF}})\), the domain of PGF transformers. This is because the functional \(\varPsi _{{\varphi }, {P}}\) operates on PGF transformers and can thus be seen as a second-order function (this point of view regards PGF as first-order objects). Recall that in contrast to this, the function \(\llbracket P \rrbracket \) is first-order – it is just a PGF transformer. The order on \(({\textsf {PGF}} \rightarrow {\textsf {PGF}})\) is obtained by lifting the order \(\sqsubseteq \) on \({\textsf {PGF}} \) pointwise (we denote it with the same symbol \(\sqsubseteq \)). This implies that \(({\textsf {PGF}} \rightarrow {\textsf {PGF}})\) is also an \(\omega \)-complete partial order. We can then show that \(\varPsi _{{\varphi }, {P}}\) (see Table 2) is a continuous function. With these properties, we obtain the following induction rule for upper bounds on \(\llbracket L \rrbracket \), cf. [42, Theorem 6]:

Lemma 1

(Fixed Point Induction for Loops). Let \(L = {\texttt {while}}\,(\varphi )\,\{P\}\) be a \({\texttt {ReDiP}} \)-loop. Further, let \(\psi :{\textsf {PGF}} \rightarrow {\textsf {PGF}} \) be a PGF transformer. Then

$$\begin{aligned} \varPsi _{{\varphi }, {P}}(\psi ) ~{}\sqsubseteq {}~\psi \qquad \text {implies}\qquad \llbracket L \rrbracket ~{}\sqsubseteq {}~\psi ~. \end{aligned}$$

The goal of the rest of the paper is to apply the rule from Lemma 1 in practice. To this end, we must somehow specify an invariant such as \(\psi \) by finite means. Since \(\psi \) is of type \(({\textsf {PGF}} \rightarrow {\textsf {PGF}})\), we consider \(\psi \) as a program I – more specifically, a \({\texttt {ReDiP}} \)-program – and identify \(\psi = \llbracket I \rrbracket \). Further, by definition

$$ \varPsi _{{\varphi }, {P}}(\llbracket I \rrbracket ) ~{}={}~\llbracket {\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \} \rrbracket ~, $$

and thus the term \(\varPsi _{{\varphi }, {P}}(\llbracket I \rrbracket )\) is also a PGF-transformer expressible as a \({\texttt {ReDiP}} \)-program. These observations and Lemma 1 imply the following:

Lemma 2

Let \(L = {\texttt {while}}\,(\varphi )\,\{P\}\) and I be \({\texttt {ReDiP}} \)-programs. Then

$$\begin{aligned} \llbracket {\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \} \rrbracket ~{}\sqsubseteq {}~\llbracket I \rrbracket \qquad \text {implies}\qquad \llbracket L \rrbracket ~{}\sqsubseteq {}~\llbracket I \rrbracket ~. \end{aligned}$$
(3)

Further, if L is UAST (Definition 4), then

$$\begin{aligned} \llbracket {\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \} \rrbracket ~{}={}~\llbracket I \rrbracket \qquad \text {iff}\qquad \llbracket L \rrbracket ~{}={}~\llbracket I \rrbracket ~. \end{aligned}$$
(4)

Lemma 2 effectively reduces checking whether \(\psi \) given as a \({\texttt {ReDiP}} \)-program I is an invariant of L to checking equivalence of \({\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \}\) and I, provided L is UAST. If I is loop-free, then the latter two programs are both loop-free and we are left with the task of proving whether they yield the same output distribution for all inputs. We now present a solution to this problem.

5.2 Deciding Equivalence of Loop-free Programs

Even in the absence of loops, deciding if two given \({\texttt {ReDiP}} \)-programs are equivalent is non-trivial as it requires reasoning about infinitely many – possibly infinite-support – distributions on program variables. In this section, we first show that \(\llbracket P_1 \rrbracket = \llbracket P_2 \rrbracket \) is decidable for loop-free \({\texttt {ReDiP}} \) programs \(P_1\) and \(P_2\), and then use this result together with Lemma 2 to obtain the main result of this paper.

SOP: Second-Order PGF. Our goal is to check if \(\llbracket P_1 \rrbracket (g) = \llbracket P_2 \rrbracket (g)\) for all \(g \in {\textsf {PGF}} \). To tackle this, we encode whole sets of PGF into a single object – an FPS we call second-order PGF (SOP). To define SOP, we need a slightly more flexible view on FPS. Recall from Definition 1 that a k-dim. FPS is an array \(f :\mathbb {N} ^k \rightarrow \mathbb {R} \). Such an f can be viewed equivalently as an l-dim. array with \((k{-}l)\)-dim. arrays as entries. In the formal sum notation, this is reflected by partitioning \(\mathbf {X} = (\mathbf {Y}, \mathbf {Z})\) and viewing f as an FPS in \(\mathbf {Y}\) with coefficients that are FPS in the other indeterminates \(\mathbf {Z}\). For example,

$$\begin{aligned} (1-Y)^{-1}(1-Z)^{-1}&~{}={}~1 + Y + Z + Y^2 + YZ + Z^2 + \ldots \\&~{}={}~(1-Z)^{-1} + (1-Z)^{-1}Y + (1-Z)^{-1}Y^2 + \ldots \end{aligned}$$

where in the lower line the coefficients \((1{-}Z)^{-1}\) are considered elements in \(\mathbb {R} [[{Z}]]\).

Definition 5

(SOP). Let \(\mathbf {U}\) and \(\mathbf {X}\) be disjoint sets of indeterminates. A formal power series \(f \in \mathbb {R} [[\mathbf {U},\mathbf {X}]]\) is a second-order PGF (SOP) if

$$ f = \sum \nolimits _{\tau \in \mathbb {N} ^{\vert {\mathbf {U}}\vert }} f(\tau ) \mathbf {U}^\tau \quad (\text {with } f(\tau ) \in \mathbb {R} [[{\mathbf {X}}]]) \qquad \text {implies}\qquad \forall \tau :f(\tau ) \in {\textsf {PGF}}. $$

That is, an SOP is simply an FPS whose coefficients are PGF – instead of generating a sequence of probabilities as PGF do, it generates a sequence of distributions. An (important) example SOP is

$$\begin{aligned} f_{ dirac } ~{}={}~(1-XU)^{-1} ~{}={}~1 + XU + X^2U^2 + \ldots ~\in \mathbb {R} [[{U,X}]] , \end{aligned}$$
(5)

i.e., for all \(i\ge 0\), \(f_{ dirac }(i) = X^i = \llbracket \mathtt {dirac}(i) \rrbracket \). As a second example consider \(f_{ binom } = {f_{ dirac }}[{X}/{0.5 + 0.5X}]\); it is clear that \(f_{ binom }(i) = (0.5 + 0.5X)^i = \llbracket \mathtt {binomial}({0.5},\,{i}) \rrbracket \) for all \(i \ge 0\). Note that if \(\mathbf {U} = \emptyset \), then SOP and PGF coincide. For fixed \(\mathbf {X}\) and \(\mathbf {U}\), we denote the set of all second-order PGF with \(\mathsf {SOP}\).
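
Closed-form SOP can be manipulated just like closed-form PGF; e.g., an illustrative SymPy sketch (our own) that recovers the binomial family generated by \(f_{ binom }\):

```python
# Sketch: f_binom = 1/(1 - (1/2 + X/2)*U) generates the PGF of
# binomial(1/2, i) as its coefficient of U**i.
import sympy as sp

X, U = sp.symbols('X U')
f_binom = 1 / (1 - (sp.Rational(1, 2) + X / 2) * U)

for i in range(4):
    coeff = f_binom.diff(U, i).subs(U, 0) / sp.factorial(i)
    print(sp.expand(coeff))  # 1, then (1/2 + X/2)**i expanded
```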

SOP Semantics of \({\texttt {ReDiP}} \). The appeal of SOP is that, syntactically, they are still formal power series, and some can be represented in closed form just like PGF. Moreover, we can readily extend our PGF transformer \(\llbracket P \rrbracket \) to an SOP transformer \(\llbracket P \rrbracket :\mathsf {SOP}\rightarrow \mathsf {SOP}\). A key insight of this paper is that – without any changes to the rules in Table 2 – applying \(\llbracket P \rrbracket \) to an SOP is the same as applying \(\llbracket P \rrbracket \) simultaneously to all the PGF it subsumes:

Theorem 3

Let P be a \({\texttt {ReDiP}} \)-program. The transformer \(\llbracket P \rrbracket :\mathsf {SOP}\rightarrow \mathsf {SOP}\) is well-defined. Further, if \(f = \sum _{\tau \in \mathbb {N} ^{\vert {\mathbf {U}}\vert }} f(\tau ) \mathbf {U}^\tau \) is an SOP, then

$$ \llbracket P \rrbracket (f) ~{}={}~\sum \nolimits _{\tau \in \mathbb {N} ^{\vert {\mathbf {U}}\vert }} \llbracket P \rrbracket (f(\tau )) \mathbf {U}^\tau ~. $$

An SOP Transformation for Proving Equivalence. We now show how to exploit Theorem 3 for equivalence checking. Let \(P_1\) and \(P_2\) be (loop-free) \({\texttt {ReDiP}} \)-programs; we are interested in proving whether \(\llbracket P_1 \rrbracket = \llbracket P_2 \rrbracket \). By linearity it holds that \(\llbracket P_1 \rrbracket = \llbracket P_2 \rrbracket \) iff \(\llbracket P_1 \rrbracket (\mathbf {X}^\sigma ) = \llbracket P_2 \rrbracket (\mathbf {X}^\sigma )\) for all \(\sigma \in \mathbb {N} ^k\), i.e., to check equivalence it suffices to consider all (infinitely many) point-mass PGF as inputs.

Lemma 3

(SOP-Characterisation of Equivalence). Let \(P_1\) and \(P_2\) be \({\texttt {ReDiP}} \)-programs with \( Vars (P_i) \subseteq \{\mathtt {x_1},\ldots ,\mathtt {x_k}\}\) for \(i \in \{1,2\}\). Further, consider a vector \(\mathbf {U} = (U_1,\ldots ,U_k)\) of meta indeterminates, and let \(g_{\mathbf {X}}\) be the SOP

$$ g_{\mathbf {X}} ~{}={}~(1 - X_1 U_1)^{-1} (1 - X_2 U_2)^{-1} \cdots (1 - X_k U_k)^{-1} ~{}\in {}~\mathbb {R} [[{\mathbf {U},\mathbf {X}}]] ~. $$

Then \(\llbracket P_1 \rrbracket = \llbracket P_2 \rrbracket \) if and only if \(\llbracket P_1 \rrbracket (g_{\mathbf {X}}) = \llbracket P_2 \rrbracket (g_{\mathbf {X}})\).

The proof of Lemma 3 (see [18, Appx. F.5]) relies on Theorem 3 and the fact that the rational SOP \(g_{\mathbf {X}}\) generates all (multivariate) point-mass PGF; in fact it holds that \(g_{\mathbf {X}} = \sum _{\sigma \in \mathbb {N} ^k} \mathbf {X}^\sigma \mathbf {U}^\sigma \), i.e., \(g_{\mathbf {X}}\) generalizes \(f_{ dirac }\) from (5). It follows:
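
To illustrate the use of \(g_{\mathbf {X}}\) (with two small programs of our own choosing), the sketch below checks that \(\mathtt {x}\mathtt {++} \mathbin {;} \mathtt {x}\mathtt {--}\) is equivalent to \({\texttt {skip}} \) on all inputs, while \(\mathtt {x}\mathtt {--} \mathbin {;} \mathtt {x}\mathtt {++}\) is not:

```python
# Sketch: equivalence checking on the SOP g_X = 1/(1 - X*U), which
# generates all point-mass inputs X**i simultaneously (Lemma 3).
import sympy as sp

X, U = sp.symbols('X U')
g = 1 / (1 - X * U)

def inc(f):                          # [[x++]]
    return X * f

def dec(f):                          # [[x--]] (monus, cf. Sect. 4.2)
    f0 = f.subs(X, 0)
    return sp.cancel(f0 + (f - f0) / X)

print(sp.simplify(dec(inc(g)) - g))  # 0:      x++ ; x--  ==  skip
print(sp.simplify(inc(dec(g)) - g))  # X - 1:  x-- ; x++  =/= skip
```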

Lemma 4

\(\llbracket P_1 \rrbracket = \llbracket P_2 \rrbracket \) is decidable for loop-free \({\texttt {ReDiP}} \)-programs \(P_1, P_2\).

Our main theorem follows immediately from Lemmas 2 and 4:

Theorem 4

Let \(L = {\texttt {while}}\,(\varphi )\,\{P\}\) be UAST with loop-free body P and I be a loop-free \({\texttt {ReDiP}} \)-program. It is decidable whether \(\llbracket L \rrbracket = \llbracket I \rrbracket \).

Example 2

In Fig. 2 we prove that the two UAST programs L and I shown there are equivalent (i.e., \(\llbracket L \rrbracket = \llbracket I \rrbracket \)) by showing that \(\llbracket {\texttt {if}}\,(\varphi )\,\{P \mathbin {;} I\} \, {\texttt {else}} \, \{{\texttt {skip}} \} \rrbracket = \llbracket I \rrbracket \), as suggested by Lemma 2. The latter is achieved as in Lemma 3: We run both programs on the input SOP \(g_{N,C} = (1 - NU)^{-1} (1 - CV)^{-1}\), where U, V are meta indeterminates corresponding to N and C, respectively, and check if the results are equal. Note that I is the loop-free specification from Example 1; thus by transitivity, the loop L is equivalent to the loop in Fig. 1.    \(\lhd \)

Fig. 2. Program equivalence follows from the equality of the resulting SOP (Lemma 3).

6 Case Studies

We have implemented our techniques in Python as a prototype called \(\textsc {Prodigy}\): PRObability DIstributions via GeneratingfunctionologY. By interfacing with different computer algebra systems (CAS), e.g., Sympy [49] and GiNaC [10, 57], as backends for symbolic computation of the PGF and SOP semantics, \(\textsc {Prodigy}\) decides whether a given probabilistic loop agrees with an (invariant) specification encoded as a loop-free \({\texttt {ReDiP}} \) program. Furthermore, it supports efficient queries on various quantities associated with the output distribution.

In what follows, we demonstrate in particular the applicability of our techniques to programs featuring stochastic dependency, parametrization, and nested loops. The examples are all presented in the same way: the iterative program on the left side and its corresponding specification on the right. The presented programs are all UAST, provided the parameters are instantiated from a suitable value domain. For each example, we report the time for performing the equivalence check on a 2.4 GHz Intel i5 Quad-Core processor with 16 GB RAM running macOS Monterey 12.0.1. Additional examples can be found in [18, Appx. E].

Fig. 3. Generating complementary binomial distributions (for \(\mathtt {n}, \mathtt {m}\)) by coin flips. \({\mathtt {binomial}({1/2},\,{\mathtt {c}})}\) is an alias for \({\mathtt {iid}({\mathtt {bernoulli}({1/2})}, \, {\mathtt {c}})}\).

Fig. 4. A program modeling two dueling cowboys with parametric hit probabilities.

Example 3

(Complementary Binomial Distributions). We show that the program in Fig. 3 generates a joint distribution on \(\mathtt {n}, \mathtt {m}\) such that both \(\mathtt {n}\) and \(\mathtt {m}\) are binomially distributed with support \(\{0,\ldots ,\mathtt {c}\}\) and are complementary in the sense that \(\mathtt {n} + \mathtt {m} = \mathtt {c}\) holds certainly (if \(\mathtt {n} = \mathtt {m} = 0\) initially; otherwise the variables are incremented by the corresponding amounts). Prodigy automatically checks that the loop agrees with the specification in 18.3 ms. The resulting distribution can then be analyzed for any given input PGF g by computing \(\llbracket I \rrbracket (g)\), where I is the loop-free program. For example, for input \(g = C^{10}\), the distribution as computed by Prodigy has the factorized closed form \((\frac{M+N}{2})^{10}\). The CAS backends exploit such factorized forms to perform algebraic manipulations more efficiently compared to fully expanded forms. For instance, we can evaluate the queries \( E [m^3 + 2mn + n^2] = 235\) and \(\Pr (m > 7 \wedge n < 3) = 7/128\) almost instantly.    \(\lhd \)
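
These queries are also easy to replay outside of the tool; an illustrative sketch by coefficient extraction from the output PGF:

```python
# Sketch: answering the queries of Example 3 from the output PGF
# ((M + N)/2)**10 obtained for the input C**10.
import sympy as sp

M, N = sp.symbols('M N')
g = ((M + N) / 2)**10
terms = sp.Poly(sp.expand(g), M, N).terms()   # [((m, n), prob), ...]

expectation = sum(c * (m**3 + 2*m*n + n**2) for (m, n), c in terms)
tail = sum(c for (m, n), c in terms if m > 7 and n < 3)

print(expectation, tail)  # 235 7/128
```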

Example 4

(Dueling Cowboys  [46]). The program in Fig. 4 models a duel of two cowboys with parametric hit probabilities \(\mathtt {a}\) and \(\mathtt {b}\). Variable \(\mathtt {t}\) indicates the cowboy who is currently taking his shot, and \(\mathtt {c}\) monitors the state of the duel (\(\mathtt {c} = 1\): duel is still running, \(\mathtt {c} = 0\): duel is over). \(\textsc {Prodigy}\) automatically verifies the specification in 11.97 ms. We defer related problems – e.g., synthesizing parameter values to meet a parameter-free specification – to future work.    \(\lhd \)

Fig. 5. Nested loops with invariants for the inner and outer loop.

Example 5

(Nested Loops). The inner loop of the program in Fig. 5 modifies \(\mathtt {x}\), which influences the termination behavior of the outer loop. Intuitively, the program models a random walk on \(\mathbb {N} \): In every step, the value of the current position \(\mathtt {x}\) changes by some random \(\delta \in \{-1,0,1,2,\ldots \}\) such that \(\delta +1 \) is geometrically distributed. The example demonstrates how our technique enables compositional reasoning. We first provide a loop-free specification for the inner loop, prove its correctness, and then simply replace the inner loop by its specification, yielding a program without nested loops. This feature is a key benefit of reusing the loop-free fragment of \({\texttt {ReDiP}} \) as a specification language. Moreover, existing techniques that cannot handle nested loops can profit from it; in fact, we can prove the overall program to be UAST using the rule of [47]. Interestingly, the outer loop has infinite expected runtime (for any input distribution where the probability that \(\mathtt {x} > 0\) is positive). We can prove this by querying the expected value of the program variable \(\mathtt {c}\) in the resulting output distribution. The automatically computed result is \(\infty \), which indeed proves that the expected runtime of this program is not finite. This example furthermore shows that our technique can be generalized beyond rational functions, since the PGF of the \(\mathtt {catalan}({p})\) distribution is \((1 - \sqrt{1 - 4 p (1{-}p) T} ) \,/\, 2p\), i.e., algebraic but not rational. We leave a formal generalization of the decidability result from Theorem 4 to algebraic functions for future work. Prodigy verifies this example in 29.17 ms.    \(\lhd \)

Scalability Issue. It is not difficult to construct programs on which Prodigy scales poorly: its performance depends highly on the number of consecutive probabilistic branches and on the size of the constant n in guards (requiring n-th order derivatives of the PGF, cf. Table 2).

7 Related Work

This section surveys research efforts that are highly related to our approach in terms of semantics, inference, and equivalence checking of probabilistic programs.

Forward Semantics of Probabilistic Programs. Kozen established in his seminal work [43] a generic way of giving forward, denotational semantics to probabilistic programs as distribution transformers. Klinkenberg et al. [42] instantiated Kozen’s semantics as PGF transformers. We refine the PGF semantics substantially such that it enjoys the following crucial properties: (i) our PGF transformers (when restricted to loop-free \({\texttt {ReDiP}} \) programs) preserve closed-form PGF and thus are effectively constructable. In contrast, the existing PGF semantics in [42] operates on infinite sums in a non-constructive fashion; (ii) our PGF semantics naturally extends to SOP, which serves as the key to reason about the exact behavior of unbounded loops (under possibly uncountably many inputs) in a fully automatic manner. The PGF semantics in [42], however, supports only (over-)approximations of looping behaviors and can hardly be automated; and (iii) our PGF semantics is capable of interpreting program constructs like i.i.d. sampling that are of particular interest in practice.

Backward Semantics of Probabilistic Programs. Many verification systems for probabilistic programs make use of backward, denotational semantics – most pertinently, the weakest preexpectation (WP) calculi [38, 46] as a quantitative extension of Dijkstra’s weakest preconditions [19]. The WP of a probabilistic program C w.r.t. a postexpectation g, denoted by \(\textsf {wp}\llbracket C \rrbracket (g)(\cdot )\), maps every initial program state \(\sigma \) to the expected value of g evaluated in the final states reached after executing C on \(\sigma \). In contrast to Dijkstra’s predicate transformer semantics, which also admits strongest postconditions, a counterpart notion of “strongest postexpectations” unfortunately does not exist [36, Chap. 7]; WP calculi are thus not amenable to forward reasoning. We remark, in particular, that checking program equivalence via WP is difficult, if not impossible, since it amounts to reasoning about uncountably many postexpectations g. We refer interested readers to [5, Chaps. 1–4] for more recent advancements in formal semantics of probabilistic programs.

Probabilistic Inference. There are a handful of probabilistic systems that employ an alternative forward semantics based on probability density function (PDF) representations of distributions, e.g., (\(\lambda \))PSI [24, 25], AQUA [32], Hakaru [14, 52], and the density compiler in [11, 12]. These systems are dedicated to probabilistic inference for programs encoding continuous distributions (or joint discrete-continuous distributions). Reasoning about the underlying PDF representations, however, amounts to resolving complex integral expressions in order to answer inference queries, thus confining these techniques either to (semi-)numerical methods [11, 12, 14, 32, 52] or exact methods yet limited to bounded looping behaviors [24, 25]. Apart from these inference systems, a recently developed language called Dice [31] featuring exact inference for discrete probabilistic programs is also confined to statically bounded loops. The tool Mora [7, 8] supports exact inference for various types of Bayesian networks, but relies on a restricted form of intermediate representation known as prob-solvable loops, whose behaviors can be expressed by a system of C-finite recurrences admitting closed-form solutions.

Equivalence of Probabilistic Programs. Murawski and Ouaknine [51] showed an Exptime decidability result for checking the equivalence of probabilistic programs over finite data types by recasting the problem in terms of probabilistic finite automata [23, 41, 56]. Their techniques have been automated in the equivalence checker APEX [45]. Barthe et al. [4] proved a 2-Exptime decidability result for checking equivalence of straight-line probabilistic programs (with deterministic inputs and no loops nor recursion) interpreted over all possible extensions of a finite field. Barthe et al. [3] developed a relational Hoare logic for probabilistic programs, which has been extensively used for, amongst others, proving program equivalence with applications in provable security and side-channel analysis.

The decidability result established in this paper is orthogonal to the aforementioned results: (i) our decidability for checking \(L \sim S\) applies to discrete probabilistic programs L with unbounded looping behaviors over a possibly infinite state space; the specification S – though, admitting no loops – encodes a possibly infinite-support distribution; yet as a compromise, (ii) our decidability result is confined to \({\texttt {ReDiP}} \) programs that necessarily terminate almost-surely on all inputs, and involve only distributions with rational closed-form PGF.

8 Conclusion and Future Work

We showed the decidability of – and presented a fully automated technique for verifying – whether a (possibly unbounded) probabilistic loop is equivalent to a loop-free specification program. Future directions include determining the complexity of our decision problem; amending the method to continuous distributions using, e.g., characteristic functions; extending the notion of probabilistic equivalence to probabilistic refinements; exploring PGF-based counterexample-guided synthesis of quantitative loop invariants (see [18, Appx. F.6] for generating counterexamples); and tackling Bayesian inference.