Automated Termination Analysis of Polynomial Probabilistic Programs

The termination behavior of probabilistic programs depends on the outcomes of random assignments. Almost sure termination (AST) is concerned with the question whether a program terminates with probability one on all possible inputs. Positive almost sure termination (PAST) focuses on termination in a finite expected number of steps. This paper presents a fully automated approach to the termination analysis of probabilistic while-programs whose guards and expressions are polynomial expressions. As proving (positive) AST is undecidable in general, existing proof rules typically provide sufficient conditions. These conditions mostly involve constraints on supermartingales. We consider four proof rules from the literature and extend these with generalizations of existing proof rules for (P)AST. We automate the resulting set of proof rules by effectively computing asymptotic bounds on polynomials over the program variables. These bounds are used to decide the sufficient conditions – including the constraints on supermartingales – of a proof rule. Our software tool Amber can thus check AST, PAST, as well as their negations for a large class of polynomial probabilistic programs, while carrying out the termination reasoning fully with polynomial witnesses. Experimental results show the merits of our generalized proof rules and demonstrate that Amber can handle probabilistic programs that are out of reach for other state-of-the-art tools.


Introduction
Classical program termination. Termination is a key property in program analysis [16]. The question whether a program terminates on all possible inputs - the universal halting problem - is undecidable. Proof rules based on ranking functions have been developed that impose sufficient conditions implying (non-)termination. Automated termination checking has given rise to powerful software tools such as AProVE [21] and NaTT [44] (using term rewriting), and UltimateAutomizer [26] (using automata theory). These tools have proven able to determine the termination of several intricate programs. The industrial tool Terminator [15] has taken termination proving into practice and is able to prove termination - or even more general liveness properties - of, e.g., device driver software. Rather than seeking a single ranking function, it takes a disjunctive termination argument using sets of ranking functions. Other results include termination proving methods for specific program classes such as linear and polynomial programs, see, e.g., [9,24].
Termination of probabilistic programs. Probabilistic programs extend sequential programs with the ability to draw samples from probability distributions. They are used, e.g., for encoding randomized algorithms, planning in AI, security mechanisms, and in cognitive science. In this paper, we consider probabilistic while-programs with discrete probabilistic choices, in the vein of the seminal works [34] and [37]. Termination of probabilistic programs differs from the classical halting problem in several respects; e.g., probabilistic programs may exhibit diverging runs that have probability mass zero in total. Such programs do not always terminate, but terminate with probability one - they almost surely terminate. An example of such a program is given in Figure 1a, where variable x is incremented by 1 with probability 1/2, and otherwise decremented by the same amount. This program encodes a one-dimensional (1D) left-bounded random walk starting at position 10. Another important difference to classical termination is that the expected number of program steps until termination may be infinite, even if the program almost surely terminates. Thus, almost sure termination (AST) does not imply that the expected number of steps until termination is finite. Programs that have a finite expected runtime are referred to as positively almost surely terminating (PAST). Figure 1c is a sample program that is PAST. While PAST implies AST, the converse does not hold, as evidenced by Figure 1a: the program of Figure 1a terminates with probability one but needs infinitely many steps on average to reach x = 0, hence is not PAST. (The terminology AST and PAST was coined in [8] and has its roots in the theory of Markov processes.)
Proof rules for AST and PAST. Proving termination of probabilistic programs is hard: AST for a single input is as hard as the universal halting problem, whereas PAST is even harder [30].
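The zero-drift behavior behind Figure 1a can be replayed by a short exact computation. Assuming the loop body x := x + 1 [1/2] x - 1 described above, the one-step expectation of x equals x itself: the walk neither rises nor falls on average, which is why it is AST but not PAST.

```python
from fractions import Fraction

# Loop body of Figure 1a: x := x + 1 [1/2] x - 1, with guard x > 0.
branches = [(Fraction(1, 2), lambda x: x + 1),
            (Fraction(1, 2), lambda x: x - 1)]

def expected_next(x):
    """Exact one-step expectation E(x_{i+1} | x_i = x)."""
    return sum(p * Fraction(update(x)) for p, update in branches)

# Zero drift: x is a martingale. No expected decrease is available for a
# ranking argument, yet the walk hits 0 with probability one (AST),
# while its expected hitting time is infinite (not PAST).
assert all(expected_next(x) == x for x in range(1, 11))
```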
Termination analysis of probabilistic programs is currently attracting quite some attention. It is not just of theoretical interest. For instance, a popular way to analyze probabilistic programs in machine learning is by using some advanced form of simulation. If, however, a program is not PAST, the simulation may take forever. In addition, the use of probabilistic programs in safety-critical environments [2,7,20] necessitates providing formal guarantees on termination. Different techniques are considered for probabilistic program termination, ranging from probabilistic term rewriting [3], sized types [17], and Büchi automata theory [14], to weakest pre-condition calculi for checking PAST [31]. A large body of work considers proof rules that provide sufficient conditions for proving AST, PAST, or their negations. These rules are based on martingale theory, in particular supermartingales. Supermartingales are stochastic processes that can, in simplified terms, be viewed as the probabilistic analog of ranking functions: the value of a random variable represents the "value" of the function at the beginning of a loop iteration. Successive random variables model the evolution of the program loop. Being a supermartingale means that the expected value of the random variables at the end of a loop iteration does not exceed its value at the start of the iteration. Constraints on supermartingales form the essential part of proof rules. For example, the AST proof rule in [38] requires the existence of a supermartingale whose value decreases by at least a certain amount with at least a certain probability on each loop iteration. Intuitively speaking, the closer the supermartingale comes to zero - indicating termination - the more probable it is that it decreases further. The AST proof rule in [38] is applicable to prove AST for the program in Figure 1a; yet, it cannot be used to prove PAST of Figures 1c-1d.
On the other hand, the PAST proof rule in [10,19] requires that the expected decrease of the supermartingale on each loop iteration is at least some positive constant, and that on loop termination the supermartingale is at most zero - very similar to the usual constraint on ranking functions. While [10,19] can be used to prove the program in Figure 1c to be PAST, these works cannot be used for Figure 1a. They cannot be used for proving Figure 1d to be PAST either. The rule for showing non-AST [13] requires the supermartingale to be repulsing. This intuitively means that the supermartingale decreases on average by at least ε and is positive on termination. Figuratively speaking, it repulses terminating states. It can be used to prove the program in Figure 1b to be not AST. In summary, while existing works for proving AST, PAST, and their negations are generic in nature, they are also restricted to certain classes of probabilistic programs. In this paper, we propose relaxed versions of existing proof rules for probabilistic termination that turn out to handle quite a number of programs that could not be proven otherwise (Section 4). In particular, (non-)termination of all four programs of Figure 1 can be proven using our proof rules.
Automated termination checking of AST and PAST. Whereas there is a large body of techniques and proof rules, software tool support to automate checking termination of probabilistic programs is still in its infancy. This paper presents novel algorithms to automate various proof rules for probabilistic programs: the three aforementioned proof rules [10,19,38,13] and a variant of the non-AST proof rule to prove non-PAST [13]. We also present relaxed versions of each of the proof rules, going beyond the state-of-the-art in the termination analysis of probabilistic programs. We focus on so-called Prob-solvable loops, extending [4]. Namely, we define Prob-solvable loops as probabilistic while-programs whose guards compare two polynomials (over program variables) and whose body is a sequence of random assignments with polynomials as right-hand sides, such that a variable x, say, only depends on variables preceding x in the loop body. While restrictive, Prob-solvable loops cover a vast set of interesting probabilistic programs (see Remark 1). An essential property of our programs is that the statistical moments of program variables can be obtained as closed-form formulas [4]. The key to our algorithmic approach is a procedure for computing asymptotic lower, upper and absolute bounds on polynomial expressions over the program variables of our programs (Section 5). This enables a novel method for automating probabilistic termination and non-termination proof rules based on (super)martingales, going beyond the state-of-the-art in probabilistic termination. Our relaxed proof rules allow us to fully automate (P)AST analysis by using only polynomial witnesses. Our experiments provide practical evidence that polynomial witnesses within Prob-solvable loops are sufficient to certify most examples from the literature and even beyond (Section 6).
Our termination tool AMBER. We have implemented our algorithmic approach in the publicly available tool AMBER. It exploits asymptotic bounds over polynomial martingales and uses the tool MORA [4] for computing the first-order moments of program variables, as well as the computer algebra package diofant. It employs over- and underapproximations realized by a simple static analysis. AMBER establishes probabilistic termination in a fully automated manner and has the following unique characteristics: (i) it includes the first implementation of the AST proof rule of [38]; (ii) it is the first tool capable of certifying AST for programs that are not PAST and cannot be split into PAST subprograms; and (iii) it is the first tool that brings the various proof rules - AST, PAST, non-AST and non-PAST - under a single umbrella. An experimental evaluation on various benchmarks shows that: (1) AMBER is superior to existing tools for automating PAST [42] and AST [10], (2) the relaxed proof rules enable proving substantially more programs, and (3) AMBER is able to automate the termination checking of intricate probabilistic programs (within the class of programs considered) that could not be automatically handled so far (Section 6). For example, AMBER solves 23 termination benchmarks that no other automated approach could handle so far.
Main contributions. To summarize, the main contributions of this paper are: 1. Relaxed proof rules for (non-)termination, enabling the treatment of a wider class of programs (Section 4). 2. Efficient algorithms to compute asymptotic bounds on polynomial expressions over program variables (Section 5). 3. Automation: a realization of our algorithms in the tool AMBER (Section 6). 4. Experiments showing the superiority of AMBER over existing tools for proving (P)AST (Section 6).

Preliminaries
We denote by N and R the sets of natural and real numbers, respectively. Further, let R̄ denote R ∪ {+∞, −∞}, R^+_0 the non-negative reals, and R[x_1,...,x_m] the polynomial ring in x_1,...,x_m over R. We write x := E_(1) [p_1] E_(2) [p_2] ... [p_{m−1}] E_(m) for the probabilistic update of program variable x, denoting the execution of x := E_(j) with probability p_j, for j = 1,...,m−1, and the execution of x := E_(m) with probability 1 − ∑_{j=1}^{m−1} p_j, where m ∈ N. We write indices of expressions over program variables in round brackets and use E_i for the stochastic process induced by expression E. This section introduces our programming language extending Prob-solvable loops [4] and defines the probability space introduced by such programs. Let E denote the expectation operator with respect to a probability space. We assume the reader to be familiar with probability theory [33].
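The implicit probability 1 − ∑ p_j of the last branch can be made concrete in a short sketch; the function `one_step_expectation` and the example update are illustrative and not part of [4].

```python
from fractions import Fraction

def one_step_expectation(x, probs, updates):
    """E of x after  x := E_(1) [p_1] ... [p_{m-1}] E_(m):  probs lists
    p_1, ..., p_{m-1}; the last branch implicitly gets probability 1 - sum."""
    all_probs = list(probs) + [1 - sum(probs)]
    return sum(p * Fraction(u(x)) for p, u in zip(all_probs, updates))

# Hypothetical update  x := x + 1 [1/3] x - 1 [1/3] 2*x  evaluated at x = 6:
e = one_step_expectation(Fraction(6), [Fraction(1, 3), Fraction(1, 3)],
                         [lambda x: x + 1, lambda x: x - 1, lambda x: 2 * x])
assert e == 8  # (7 + 5 + 12) / 3
```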

Programming Model: Prob-Solvable Loops
Prob-solvable loops [4] are syntactically restricted probabilistic programs with polynomial expressions over program variables. The statistical higher-order moments of the program variables of such loops, like expectation and variance, can always be computed as functions of the loop counter. In this paper, we extend Prob-solvable loops with polynomial loop guards in order to study their termination behavior, as follows.
If L is clear from the context, the subscript L is omitted from I_L, G_L, and U_L. Figure 1 gives four example Prob-solvable loops.
Remark 1 (Prob-solvable expressiveness). The enforced order of assignments in the loop body of Prob-solvable loops seems restrictive. However, many non-trivial probabilistic programs can be naturally modeled as succinct Prob-solvable loops. These include complex stochastic processes such as 2D random walks and dynamic Bayesian networks [5]. Almost all existing benchmarks on automated probabilistic termination analysis fall within the scope of Prob-solvable loops (cf. Section 6).
In the sequel, we consider an arbitrary Prob-solvable loop L and provide all definitions relative to L. The semantics of L is defined next, by associating L with a probability space.

Canonical Probability Space
A probabilistic program, and thus a Prob-solvable loop, can be semantically described as a probabilistic transition system [10] or as a probabilistic control flow graph [13], which in turn induces an infinite Markov chain (MC). An MC is associated with a sequence space [33], a special probability space. In the sequel, we associate L with the sequence space of its corresponding MC, similarly as in [25]. Note that any infinite sequence of states is a run; infeasible runs will, however, be assigned measure 0. We write s ⊨ B to denote that the logical formula B holds in state s. Intuitively, P(Cyl(π)) is the probability that the prefix π is the sequence of the first |π| program states when executing L. Moreover, the σ-algebra F_i intuitively captures the information about the program run after the loop body U has been executed i times. We note that the effect of the loop body U is considered atomic.
In order to formalize termination properties of a Prob-solvable loop L, we define the looping time of L as a random variable in L's sequence space.
Definition 4 (Looping Time of L). The looping time of L is the random variable T_¬G : Σ → N ∪ {∞} with T_¬G(ϑ) := inf { i ∈ N | ϑ_i ⊨ ¬G }.

Intuitively, the looping time T_¬G maps a program run of L to the index of the first state falsifying the loop guard G of L, or to ∞ if no such state exists. We now formalize termination properties of L using the looping time T_¬G.
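As a small illustration of the looping time, T_¬G can be read off a state sequence directly; the run below is a hypothetical deterministic one, used only to make the definition concrete.

```python
import math

def looping_time(run, guard):
    """T_notG: index of the first state of the run falsifying the guard,
    or infinity if every listed state satisfies it."""
    for i, state in enumerate(run):
        if not guard(state):
            return i
    return math.inf

# Hypothetical deterministic run x_i = 10 - i with guard x > 0:
# the guard first fails at state x = 0, i.e. at index 10.
run = [10 - i for i in range(20)]
assert looping_time(run, lambda s: s > 0) == 10
assert looping_time([1, 2, 3], lambda s: s > 0) == math.inf
```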

Martingales
While, for arbitrary probabilistic programs, determining P(T_¬G < ∞) and E(T_¬G) is undecidable, sufficient conditions for AST, PAST and their negations have been developed [10,19,38,13]. These works use (super)martingales, which are special stochastic processes. In this section, we adapt the general setting of martingale theory to a Prob-solvable loop L and then formalize sufficient termination conditions for L in Section 3.
Definition 6 (Stochastic Process of L). Every arithmetic expression E over the program variables of L induces the stochastic process (E_i)_{i∈N}, where the random variable E_i maps a program run to the value of E in the run's i-th state. In the sequel, for a boolean condition B over the program variables x of L, we write B_i to refer to the result of substituting x by x_i in B.
Definition 7 (Martingales). Let (Ω, Σ, (F_i)_{i∈N}, P) be a filtered probability space and (M_i)_{i∈N} an integrable stochastic process adapted to (F_i)_{i∈N}. Then (M_i)_{i∈N} is a martingale if E(M_{i+1} | F_i) = M_i, and a supermartingale if E(M_{i+1} | F_i) ≤ M_i (almost surely). For an arithmetic expression E over the program variables of L, we refer to E(E_{i+1} − E_i | F_i) as the martingale expression of E.

Proof Rules for Probabilistic Termination
While AST and PAST are undecidable in general [30], sufficient conditions, called proof rules, for AST and PAST have been introduced, see e.g. [10,19,38,13]. In this section, we survey four proof rules, adapted to Prob-solvable loops. In the sequel, a pure invariant is a loop invariant in the classical deterministic sense [27]. Based on the probability space corresponding to L, a pure invariant holds before and after every iteration of L.

Positive Almost Sure Termination (PAST)
The proof rule for PAST introduced in [10] relies on the notion of ranking supermartingales (RSMs). An RSM is a supermartingale that decreases by a fixed positive amount ε on average at every loop iteration. Intuitively, RSMs resemble ranking functions for deterministic programs, adapted to the probabilistic setting.
Theorem 1 (Ranking-Supermartingale-Rule (RSM-Rule) [10,19]). Let M : R^m → R be an expression over the program variables of L, I a pure invariant of L, and ε > 0. Assume the following conditions hold for all i ∈ N:

(1) ¬G_i ∧ I_i implies M_i ≤ 0, and
(2) G_i ∧ I_i implies E(M_{i+1} − M_i | F_i) ≤ −ε.

Then, L is PAST. Further, M is called an ε-ranking supermartingale. For Figure 1c, set M := 100 − x^2 − y^2 and ε := 2, and let I be true. Condition (1) of Theorem 1 trivially holds, as x^2 + y^2 ≥ 100 on termination. Further, M is also an ε-ranking supermartingale, as E(M_{i+1} − M_i | F_i) = −2 whenever the guard holds. Figure 1c is thus proved PAST using the RSM-Rule.
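The drift computation behind this example can be replayed symbolically. The loop body of Figure 1c is not reproduced in this excerpt; the sketch below assumes the symmetric 2D random walk with guard x² + y² < 100, which is consistent with the witness M := 100 − x² − y² and ε := 2.

```python
import sympy as sp
from itertools import product

x, y = sp.symbols('x y')
M = 100 - x**2 - y**2  # candidate RSM for the assumed guard x**2 + y**2 < 100

# Assumed loop body: x and y each take a +/-1 step with probability 1/2,
# independently, giving four equally likely branches.
drift = sum(sp.Rational(1, 4) * (M.subs({x: x + dx, y: y + dy},
                                        simultaneous=True) - M)
            for dx, dy in product([1, -1], repeat=2))

assert sp.expand(drift) == -2  # E(M_{i+1} - M_i | F_i) = -2, matching eps = 2
```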

Almost Sure Termination (AST)
Recall that Figure 1a is AST but not PAST; hence the RSM-Rule cannot be used for Figure 1a. By relaxing the ranking conditions, the proof rule in [38] uses general supermartingales to prove AST of programs that are not necessarily PAST.
Theorem 2 (Supermartingale-Rule (SM-Rule) [38]). Let M : R^m → R_{≥0} be an expression over the program variables of L and I a pure invariant of L. Let p : R_{≥0} → (0,1] (for probability) and d : R_{≥0} → R_{>0} (for decrease) be antitone (i.e. monotonically decreasing) functions. Assume the following conditions hold for all i ∈ N:

(1) G_i ∧ I_i implies E(M_{i+1} − M_i | F_i) ≤ 0, and
(2) G_i ∧ I_i implies P(M_{i+1} − M_i ≤ −d(M_i) | F_i) ≥ p(M_i).

Then, L is AST. Intuitively, the requirement of d and p being antitone forbids that the "execution progress" of L towards termination becomes infinitely small while still being positive.
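For the random walk of Figure 1a with M := x, the SM-Rule conditions can be checked by exact enumeration of the two branches, using the constant functions d := 1 and p := 1/2 (the restriction to constants also used later for automation).

```python
from fractions import Fraction

# Figure 1a: x := x + 1 [1/2] x - 1 with guard x > 0.  Take M := x and the
# constant functions d := 1 (decrease) and p := 1/2 (probability).
branches = [(Fraction(1, 2), +1), (Fraction(1, 2), -1)]  # (probability, delta)

drift = sum(p * d for p, d in branches)
decrease_prob = sum(p for p, d in branches if d <= -1)

assert drift <= 0                        # supermartingale condition
assert decrease_prob >= Fraction(1, 2)   # decrease by d = 1 with prob. >= p
```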

Non-Termination
While Theorems 1 and 2 can be used for proving PAST and AST, respectively, they are not applicable to the analysis of non-terminating Prob-solvable loops. Two sufficient conditions for certifying the negations of AST and PAST have been introduced in [13] using so-called repulsing-supermartingales. Intuitively, a repulsing-supermartingale M on average decreases in every iteration of L and on termination is non-negative. Figuratively, M repulses terminating states.
Theorem 3 (Repulsing-AST-Rule (R-AST-Rule) [13]). Let M : R^m → R be an expression over the program variables of L, I a pure invariant of L, ε > 0 and c > 0. Assume the following conditions hold for all i ∈ N:

(1) M_0 < 0,
(2) ¬G_i ∧ I_i implies M_i ≥ 0,
(3) G_i ∧ I_i implies E(M_{i+1} − M_i | F_i) ≤ −ε, and
(4) |M_{i+1} − M_i| ≤ c (c-bounded differences).

Then, L is not AST. The R-AST-Rule can, for instance, be instantiated to certify that Figure 1b is not AST.
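Since the body of Figure 1b is not reproduced in this excerpt, the following sketch instantiates the repulsing conditions of [13] on an assumed stand-in: the biased walk x := x + 1 [2/3] x − 1 with guard x > 0 and the witness M := −x.

```python
from fractions import Fraction

# Assumed stand-in for Figure 1b (its body is not shown in this excerpt):
# the biased walk  x := x + 1 [2/3] x - 1  with guard x > 0, and M := -x.
branches = [(Fraction(2, 3), +1), (Fraction(1, 3), -1)]  # (probability, delta x)

drift_M = sum(p * (-d) for p, d in branches)  # M = -x flips the sign of delta
c = max(abs(d) for _, d in branches)          # bound on |M_{i+1} - M_i|

assert drift_M == Fraction(-1, 3)  # repulsing: decreases on average, eps = 1/3
assert c == 1                      # c-bounded differences
# On termination (x <= 0) we have M = -x >= 0, as the rule requires.
```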
While Theorem 3 can prove programs not to be AST, and thus also not PAST, it cannot be used to prove programs not to be PAST when they are AST. For example, Theorem 3 cannot be used to prove that Figure 1a is not PAST. To address such cases, a variation of the R-AST-Rule [13] for certifying programs not to be PAST arises by relaxing the condition ε > 0 of the R-AST-Rule to ε ≥ 0. We refer to this variation as the Repulsing-PAST-Rule (R-PAST-Rule).

Relaxed Proof Rules for Probabilistic Termination
While Theorems 1-3 provide sufficient conditions for proving PAST, AST and their negations, their applicability to Prob-solvable loops is somewhat restricted. For example, the RSM-Rule cannot be used to prove Figure 1d to be PAST using the simple expression M := x, as explained in detail in Example 4, but may require more complex witnesses for certifying PAST, complicating automation. In this section, we relax the conditions of Theorems 1-3 by requiring these conditions to only hold "eventually". A property P(i) parameterized by a natural number i ∈ N holds eventually if there is an i_0 ∈ N such that P(i) holds for all i ≥ i_0. Our relaxations of probabilistic termination proof rules can intuitively be described as follows: if L, after a fixed number of steps, almost surely reaches a state from which the program is PAST or AST, then the program is PAST or AST, respectively. Let us first illustrate the benefits of reasoning with "eventually" holding properties for probabilistic termination.

Example 4. Figure 1d either terminates within the first three iterations or, after three loop iterations, is in a state such that the RSM-Rule is applicable. Therefore, Figure 1d is PAST.

We therefore relax the RSM-Rule and SM-Rule of Theorems 1 and 2 as follows.

Theorem 4 (Relaxed RSM-Rule and SM-Rule). L is PAST if the conditions of the RSM-Rule (Theorem 1) hold for all i ≥ i_0 for some i_0 ∈ N; likewise, L is AST if the conditions of the SM-Rule (Theorem 2) hold for all i ≥ i_0 for some i_0 ∈ N.

Proof. We prove the relaxation of the RSM-Rule; the proof of the relaxed SM-Rule is analogous. Let L := I while G do U end be as in Definition 1. Assume L satisfies the conditions (1)-(2) of Theorem 1 for all i ≥ i_0 for some i_0 ∈ N. We construct the following probabilistic program P, where i is a new variable not appearing in L:

    I; i := 0
    while G and i < i_0 do U; i := i + 1 end
    while G do U; i := i + 1 end

We first argue that if P is PAST, then so is L. Assume P to be PAST. Then, by the definition of P, the looping time of L is either bounded by i_0 or finite in expectation. In both cases, L is PAST. Finally, observe that P is PAST if and only if its second while-loop is PAST. However, the second while-loop of P can be certified to be PAST using the RSM-Rule, additionally using i ≥ i_0 as an invariant.
Remark 2. The central point of our proof rule relaxations is that they allow for simpler witnesses. While for Example 4 it can be checked that M := x + 2^(y+5) is an RSM, the example illustrates that the relaxed proof rule allows for a much simpler PAST witness (linear instead of exponential). This simplicity is key for automation.
Similar to Theorem 4, we relax the R-AST-Rule and the R-PAST-Rule. However, compared to Theorem 4, it is not enough for a non-termination proof rule to certify non-AST from some state onward, because L may never reach this state, as it might terminate earlier. Therefore, a necessary assumption when relaxing non-termination proof rules is to ensure that L has a positive probability of reaching the state after which a proof rule witnesses non-termination. This is illustrated in the following example.
However, after the first iteration of L, M satisfies all requirements of the R-AST-Rule. Moreover, L always reaches the second iteration, because in the first iteration x almost surely does not change. It follows that Figure 2b is not AST.
The following theorem formalizes the observation of Example 5 relaxing the R-AST-Rule and R-PAST-Rule of Theorem 3.

Theorem 5 (Relaxed Non-Termination Proof Rules). For the R-AST-Rule to certify non-AST for L (Theorem 3), as well as for the R-PAST-Rule to certify non-PAST for L, it suffices that the respective conditions hold for all i ≥ i_0 for some i_0 ∈ N, provided L reaches iteration i_0 with positive probability.
The proof of Theorem 5 is similar to the one of Theorem 4 and available in [40]. In what follows, whenever we write RSM-Rule, SM-Rule, R-AST-Rule or R-PAST-Rule we refer to our relaxed versions of the proof rules.

Algorithmic Termination Analysis through Asymptotic Bounds
The two major challenges in automating the proof rules of Sections 3 and 4 are (i) constructing expressions M over the program variables and (ii) proving inequalities involving E(M_{i+1} − M_i | F_i). In this section, we address these two challenges for Prob-solvable loops. For the loop guard G_L ≡ P > Q, we also write G_L for the polynomial P − Q; as before, if L is clear from the context, we omit the subscript L. It then holds that the polynomial inequality G > 0 is equivalent to the guard G.
(i) Constructing expressions M: As the candidate witness we choose the loop guard polynomial G itself. The conditions of the proof rules then reduce to conditions of the form E(G_{i+1} − G_i | F_i) ≤ −ε for some ε > 0, P(G_{i+1} − G_i ≤ −d | F_i) ≥ p for some p ∈ (0,1] and d ∈ R^+ (for the purpose of efficient automation, we restrict the functions d(r) and p(r) to be constant), and |G_{i+1} − G_i| ≤ c for some c > 0. All these conditions express bounds over G_i. Choosing G as the potential witness may seem simplistic. However, Example 4 already illustrated how our relaxed proof rules can mitigate the need for more complex witnesses (even exponential ones). The computational effort in our approach does not lie in synthesizing a complex witness but in constructing asymptotic bounds for the loop guard. Our approach can therefore be seen as complementary to approaches synthesizing more complex witnesses [10,11,13]. The martingale expression E(G_{i+1} − G_i | F_i) is an expression over program variables, whereas G_{i+1} − G_i cannot be interpreted as a single expression but only through a distribution of expressions.

Definition 8 (One-step Distribution). For an expression H over the program variables of L, the one-step distribution U_L^H is the distribution over expressions obtained by executing the loop body U_L once: every combination of probabilistic branches of U_L contributes the correspondingly substituted version of H, weighted by the branch probabilities.

The notation U_L^H is chosen to suggest that the loop body U_L is "applied" to the expression H, leading to a distribution over expressions. Intuitively, the support supp(U_L^H) of this distribution contains all possible updates of H after executing a single iteration of U_L.
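A one-step distribution can be computed by enumerating branch combinations and substituting into H. The following sketch (the function name is ours; branch choices of distinct variables are assumed independent) uses sympy.

```python
import sympy as sp
from itertools import product

x = sp.symbols('x')

def one_step_distribution(H, body):
    """U^H: distribution of expression H after one execution of the loop
    body.  `body` maps each variable to its list of (probability, update)
    branches; branches of distinct variables are combined independently."""
    dist = {}
    for combo in product(*body.values()):
        p = sp.Integer(1)
        for prob, _ in combo:
            p *= prob
        subst = dict(zip(body.keys(), (upd for _, upd in combo)))
        expr = sp.expand(H.subs(subst, simultaneous=True))
        dist[expr] = dist.get(expr, 0) + p
    return dist

# H = x under the body  x := x + 1 [1/2] x - 1:
body = {x: [(sp.Rational(1, 2), x + 1), (sp.Rational(1, 2), x - 1)]}
dist = one_step_distribution(x, body)
# supp(U^x) = {x + 1, x - 1}, each carrying probability 1/2
```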
(ii) Proving inequalities involving E(M i+1 −M i |F i ): To automate the termination analysis of L with the proof rules from Section 3, we need to compute bounds for the expression E(G i+1 −G i |F i ) as well as for the branches of G. In addition, our relaxed proof rules from Section 4 only need asymptotic bounds, i.e. bounds which hold eventually. In Section 5.2, we propose Algorithm 1 for computing asymptotic lower and upper bounds for any polynomial expression over program variables of L. Our procedure allows us to derive bounds for E(G i+1 −G i |F i ) and the branches of G. Before formalizing our method, let us first illustrate how reasoning with asymptotic bounds helps to apply termination proof rules to L.
Example 6 (Asymptotic Bounds for the RSM-Rule). Consider the following program:

    x := 1, y := 0
    while x < 100 do
        y := y + 1
        x := 2x + y^2 [1/2] 1/2·x
    end

If the martingale expression of the loop guard were bounded from above by a negative constant, we could certify the program to be PAST using the RSM-Rule. However, the martingale expression involves the variable x, whose asymptotic behavior is not immediately clear. By taking a closer look at the variable x, we observe that it is eventually and almost surely lower bounded by the function α·2^{−i} for some α ∈ R^+. Therefore, eventually E(G_{i+1} − G_i | F_i) ≤ −γ·i^2 for some γ ∈ R^+. By our RSM-Rule, the program is PAST. Now, the question arises how the asymptotic lower bound α·2^{−i} for x can be computed automatically. In every iteration, x is either updated with 2x + y^2 or 1/2·x. Considering the updates as recurrences, we have the inhomogeneous parts y^2 and 0. Asymptotic lower bounds for these parts are i^2 and 0, respectively, where 0 is the "asymptotically smallest" one. Taking 0 as the inhomogeneous part, we construct two recurrences: (1) l_0 = α, l_{i+1} = 2·l_i + 0 and (2) l_0 = α, l_{i+1} = 1/2·l_i + 0, for some α ∈ R^+. Solutions to these recurrences are α·2^i and α·2^{−i}, where the latter is the desired lower bound because it is "asymptotically smaller". We will formalize this idea of computing asymptotic bounds in Algorithm 1.
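The recurrence reasoning of Example 6 can be mechanized with sympy's `rsolve`: solve both candidate recurrences in closed form and keep the asymptotically smaller solution as the lower-bound candidate.

```python
import sympy as sp

n = sp.symbols('n', integer=True)
f = sp.Function('f')
alpha = sp.symbols('alpha', positive=True)

# Candidate recurrences from Example 6 with inhomogeneous part 0:
# (1) l_{i+1} = 2*l_i   and   (2) l_{i+1} = (1/2)*l_i, both with l_0 = alpha.
cand1 = sp.rsolve(f(n + 1) - 2 * f(n), f(n), {f(0): alpha})
cand2 = sp.rsolve(f(n + 1) - sp.Rational(1, 2) * f(n), f(n), {f(0): alpha})

# Keep the asymptotically smaller closed form as the lower-bound candidate.
ratio = sp.limit(cand2 / cand1, n, sp.oo)
lower = cand2 if ratio == 0 else cand1

assert sp.simplify(lower - alpha * 2**(-n)) == 0  # alpha * 2**(-n) wins
```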
We next present our method for computing asymptotic bounds over martingale expressions in Sections 5.1-5.2. Based on these asymptotic bounds, in Section 5.3 we introduce algorithmic approaches for our proof rules from Section 4, solving our aforementioned challenges (i)-(ii) in a fully automated manner (Section 5.4).

Prob-solvable Loops and Monomials
Algorithm 1 computes asymptotic bounds on monomials over program variables in a recursive manner. To ensure termination of Algorithm 1, it is important that there are no circular dependencies among monomials. By the definition of Prob-solvable loops, this indeed holds for program variables (monomials of order 1): every Prob-solvable loop L comes with an ordering on its variables, and every variable is restricted to depend only linearly on itself and polynomially on preceding variables. Acyclic dependencies naturally extend from single variables to monomials.

Definition (Monomial Ordering). Let y_1 = x_m^{p_m}·...·x_1^{p_1} and y_2 = x_m^{q_m}·...·x_1^{q_1} be monomials over the ordered program variables x_1,...,x_m of L. We define y_1 ⪯ y_2 if and only if (p_m,...,p_1) ≤_lex (q_m,...,q_1), where ≤_lex is the lexicographic order on N^m. The order ⪯ is total because ≤_lex is total. With y_1 ≺ y_2 we denote y_1 ⪯ y_2 ∧ y_1 ≠ y_2.
To prove acyclic dependencies for monomials, we exploit the following fact. Lemma 1. Let y_1, y_2, z_1, z_2 be monomials. If y_1 ⪯ z_1 and y_2 ⪯ z_2, then y_1·y_2 ⪯ z_1·z_2.
By structural induction over monomials and Lemma 1, we establish: Lemma 2 (Monomial Acyclic Dependency). Let x be a monomial over the program variables of L. For every branch B ∈ supp(U_L^x) and every monomial y occurring in B, y ⪯ x holds.
Lemma 2 states that the value of a monomial x over the program variables of L only depends on the values of monomials y which precede x in the monomial ordering ⪯. This ensures that the dependencies among monomials over the program variables of L are acyclic.
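Representing monomials by their exponent tuples (q_m,...,q_1), the ordering is just lexicographic comparison of tuples, and the compatibility with products stated in Lemma 1 can be spot-checked directly (the helper names are ours).

```python
def mono_leq(y1, y2):
    """Monomial ordering: monomials are exponent tuples (q_m, ..., q_1) over
    the ordered program variables; compare lexicographically."""
    return y1 <= y2  # Python compares tuples lexicographically

def mono_mul(y1, y2):
    """Multiplying monomials adds exponents componentwise."""
    return tuple(a + b for a, b in zip(y1, y2))

# A Lemma 1 instance: y1 <= z1 and y2 <= z2 imply y1*y2 <= z1*z2.
y1, z1 = (0, 2), (1, 0)   # e.g. x1**2 precedes x2
y2, z2 = (0, 1), (0, 3)   # e.g. x1 precedes x1**3
assert mono_leq(y1, z1) and mono_leq(y2, z2)
assert mono_leq(mono_mul(y1, y2), mono_mul(z1, z2))
```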

Computing Asymptotic Bounds for Prob-solvable Loops
The structural result on monomial dependencies from Lemma 2 allows for recursive procedures over monomials. This is exploited in Algorithm 1 for computing asymptotic bounds for monomials. The standard Big-O notation does not differentiate between positive and negative functions, as it considers the absolute value of functions. We, however, need to differentiate between functions like 2^i and −2^i. Therefore, we introduce the notions of domination and bounding functions.
Intuitively, a function f dominates a function g if f eventually surpasses g modulo a positive constant factor. Exponential polynomials are sums of products of polynomials with exponential functions, i.e. functions of the form ∑_j p_j(x)·c_j^x, where c_j ∈ R^+_0. All functions arising in Algorithms 1-4 are exponential polynomials. For a finite set F of exponential polynomials, a function dominating F and a function dominated by F are easily computable with standard techniques, by analyzing the terms of the functions in F. With dominating(F) we denote an algorithm computing an exponential polynomial dominating F; with dominated(F) we denote an algorithm computing an exponential polynomial dominated by F. We assume the functions returned by dominating(F) and dominated(F) to be monotone and either non-negative or non-positive.
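A minimal sketch of dominating(F), assuming the inputs are sympy exponential polynomials of eventually constant sign: pick an element of maximal asymptotic growth by pairwise limit comparison and return its absolute value, which is non-negative (and monotone for inputs like these).

```python
import sympy as sp

i = sp.symbols('i', positive=True)

def dominating(F):
    """Sketch: return an exponential polynomial dominating every g in F,
    i.e. eventually above each |g| modulo a positive constant factor."""
    best = F[0]
    for g in F[1:]:
        if sp.limit(sp.Abs(g) / sp.Abs(best), i, sp.oo) == sp.oo:
            best = g  # g grows strictly faster than the current best
    return sp.Abs(best)

F = [i**2, 2**i, -(3**i)]
assert dominating(F) == 3**i  # 3**i eventually surpasses i**2, 2**i and -3**i
```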

Example 7 (Domination). For instance, 2^i dominates i^2; i^2 dominates i; and i^2 also dominates −2^i, as the latter is eventually negative.

Definition (Bounding Function). Let E be an arithmetic expression over the program variables of L. A monotone function u : N → R is an upper bounding function for E if, eventually, almost surely, and modulo a positive constant factor, E_i ≤ u(i) holds whenever T_¬G > i; a lower bounding function l is defined analogously with E_i ≥ l(i). An absolute bounding function for E is an upper bounding function for |E|.
A bounding function imposes a bound on an expression E over the program variables holding eventually, almost surely, and modulo a positive constant factor. Moreover, bounds on E only need to hold as long as the program has not yet terminated.
Given a Prob-solvable loop L and a monomial x over the program variables of L, Algorithm 1 computes a lower and an upper bounding function for x. Because every polynomial expression is a linear combination of monomials, the procedure can be used to compute lower and upper bounding functions for any polynomial expression over L's program variables, by substituting every monomial with its lower or upper bounding function depending on the sign of the monomial's coefficient. Once a lower bounding function l and an upper bounding function u are computed, an absolute bounding function can be computed as dominating({u, −l}).
In Algorithm 1, candidates for bounding functions are modeled using recurrence relations. Solutions s(i) of these recurrences are closed-form candidates for bounding functions, parameterized by the loop iteration i. Algorithm 1 relies on the existence of closed-form solutions of recurrences. While closed-forms of general recurrences do not always exist, a property of C-finite recurrences - linear recurrences with constant coefficients - is that their closed-forms always exist and are computable [32]. In all occurring recurrences, we consider a monomial over program variables as a single function. Therefore, throughout this section, all recurrences arising from a Prob-solvable loop L in Algorithm 1 are C-finite or can be turned into C-finite recurrences. Moreover, closed-forms s(i) of C-finite recurrences are given by exponential polynomials. Therefore, for any solution s(i) of a C-finite recurrence and any constant r ∈ R, it holds that s(i + r) ∈ Θ(s(i)). Intuitively, this property states that constant shifts do not change the asymptotic behavior of s; we use it at various proof steps in this section. Moreover, we recall that limits of exponential polynomials are computable [23]. Lemma 2, the computability of closed-forms of C-finite recurrences, and the fact that within a Prob-solvable loop only finitely many monomials can occur together imply the termination of Algorithm 1. Its correctness is stated in the next theorem.

Theorem 6 (Correctness of Algorithm 1). Let x be a monomial over the program variables of L. The functions returned by Algorithm 1 are a lower and an upper bounding function for x.

Proof. Intuitively, it has to be shown that, regardless of the paths through the loop body taken by any program run, the value of x is always eventually upper bounded by some function in uCand and eventually lower bounded by some function in lCand (almost surely and modulo positive constant factors). We show that x is always eventually upper bounded by some function in uCand; the proof for the lower bounding function is analogous.
Let ϑ ∈ Σ be a possible program run, i.e. P(Cyl(π)) > 0 for all finite prefixes π of ϑ. Then, for every i ∈ N with T¬G(ϑ) > i, the value of x in the next iteration is given by one of the loop-body branches:

x_{i+1}(ϑ) = a^(j) · x_i(ϑ) + P^(j)_i(ϑ) for some j ∈ {1,...,k},

where a^(j) ∈ Rec(x) and P^(j) ∈ Inhom(x) are polynomials over program variables. Let u_1(i),...,u_k(i) be upper bounding functions of P^(1),...,P^(k), which are computed recursively at line 10. Moreover, let U(i) := dominating({u_1(i),...,u_k(i)}), minRec = minRec(x) and maxRec = maxRec(x). Let l_0 ∈ N be the smallest number such that for all j ∈ {1,...,k} and i ≥ l_0:

P(P^(j)_i ≤ α_j · u_j(i) | T¬G > i) = 1 for some α_j ∈ R+, and (3)
u_j(i) ≤ β · U(i) for some β ∈ R+. (4)

Thus, all inequalities from the bounding functions u_j and the dominating function U hold from l_0 onward. Because U is a dominating function, it is by definition either non-negative or non-positive. Assume U(i) to be non-negative; the case in which U(i) is non-positive is symmetric. Using facts (3) and (4), we establish that, for the constant γ := β · max_{j=1..k} α_j, it holds that P(P^(j)_i ≤ γ · U(i) | T¬G > i) = 1 for all j ∈ {1,...,k} and all i ≥ l_0. Let l_1 be the smallest number such that l_1 ≥ l_0 and U(i + l_0) ≤ δ · U(i) for all i ≥ l_1 and some δ ∈ R+.
Case 1, x_i is almost surely negative for all i ≥ l_1: Consider the recurrence relation y_0 = m, y_{i+1} = minRec · y_i + η · U(i), where η := max(γ, δ) and m is the maximum value of x_{l_1}(ϑ) among all possible program runs ϑ. Note that m exists because there are only finitely many values x_{l_1}(ϑ) for possible program runs ϑ. Moreover, m is negative by our case assumption. By induction, we get P(x_i ≤ y_{i−l_1} | T¬G > i) = 1 for all i ≥ l_1. Therefore, for a closed-form solution s(i) of the recurrence relation y_i, we get P(x_i ≤ s(i−l_1) | T¬G > i) = 1 for all i ≥ l_1. We emphasize that s exists and can effectively be computed because y_i is C-finite. Moreover, s(i−l_1) ≤ θ · s(i) for all i ≥ l_2, for some l_2 ≥ l_1 and some θ ∈ R+. Therefore, s satisfies the bound condition of an upper bounding function. Also, s is present in uCand by choosing the symbolic constants c_2 and d to represent −m and η, respectively. The function u(i) := dominating(uCand) at line 12 dominates uCand (hence also s), is monotone, and is either non-positive or non-negative. Therefore, u(i) is an upper bounding function for x.
for some α ∈ R+. The inequality results from replacing x_i by l(i). Therefore, eventually and almost surely −x_i/4 − i²/2 − i − 1/2 ≤ −β · i² for some β ∈ R+. Thus, −i² is an upper bounding function for the expression −x_i/4 − i²/2 − i − 1/2.

Remark 3. Algorithm 1 describes a general procedure computing bounding functions for special sequences. Figuratively, these are sequences s with s_{i+1} = f(s_i, i), where in every step the function f is chosen non-deterministically from a fixed set of special functions (corresponding to branches in our case). We reserve the investigation of applications of bounding functions for such sequences beyond the probabilistic setting for future work.

Algorithms for Termination Analysis of Prob-solvable Loops
With Algorithm 1 at hand to compute bounding functions for polynomial expressions over program variables, we are now able to formalize our algorithmic approaches automating the termination analysis of Prob-solvable loops using the proof rules from Section 4. Given a Prob-solvable loop L and a polynomial expression E over L's variables, we denote by lbf(E), ubf(E) and abf(E) functions computing a lower, an upper and an absolute bounding function for E, respectively. Our algorithmic approach for proving PAST using the RSM-Rule is given in Algorithm 2.
We obtain the upper bounding function u(i) := −i² for E. Because lim_{i→∞} u(i) < 0, Algorithm 2 returns true. This is valid because u(i) having a negative limit witnesses that E is eventually bounded by a negative constant and is therefore eventually an RSM.
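The limit check at the core of Algorithm 2 can be sketched as follows. The function name and symbolic setup are illustrative, not Amber's actual interface; the input is assumed to be the (already computed) upper bounding function of the martingale expression, given as a sympy expression in i:

```python
from sympy import Symbol, limit, oo

i = Symbol('i', positive=True)

def rsm_limit_check(u):
    """Return True if the upper bounding function u(i) has a negative
    limit, witnessing that the bounded expression is eventually an RSM."""
    return bool(limit(u, i, oo) < 0)

print(rsm_limit_check(-i**2))  # True: the limit is -oo
print(rsm_limit_check(1 / i))  # False: the limit is 0, not negative
```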
We recall that all functions arising from L are exponential polynomials (see Section 5.2) and that limits of exponential polynomials are computable [23]. Therefore, the termination of Algorithm 2 is guaranteed and its correctness is stated next.
Theorem 7 (Correctness of Algorithm 2). If Algorithm 2 returns true on input L, then L with G_L satisfies the RSM-Rule.
Proof. When returning true at line 4, we have P(E_i ≤ α · u(i) | T¬G > i) = 1 for all i ≥ i_0 and some i_0 ∈ N, α ∈ R+. Moreover, u(i) < −ε for all i ≥ i_1, for some i_1 ∈ N and ε ∈ R+, by the definition of the limit. From this it follows that for all i ≥ max(i_0, i_1), almost surely G_i ⟹ E(G_{i+1} − G_i | F_i) ≤ −α · ε, which means G is eventually an RSM.
Our approach for proving AST using the SM-Rule is captured by Algorithm 3. The proofs of Theorem 8 and Theorem 9 are similar to that of Theorem 7 and can be found in [40]. As established in Section 4, the relaxation of the R-AST-Rule requires that there is a positive probability of reaching the iteration i_0 after which the conditions of the proof rule hold. Regarding automation, we strengthen this condition by ensuring that there is a positive probability of reaching any iteration, i.e. ∀i ∈ N : P(G_i) > 0. Obviously, this implies P(G_{i_0}) > 0. Furthermore, with CanReachAnyIteration(L) we denote a computable under-approximation of ∀i ∈ N : P(G_i) > 0. That is, CanReachAnyIteration(L) implies ∀i ∈ N : P(G_i) > 0. Our approach for proving non-AST is summarized in Algorithm 4.
(modulo a positive constant factor). Therefore, the algorithm returns true. This is correct, as all the preconditions of the R-AST-Rule are satisfied (and therefore L is not AST).

Theorem 9 (Correctness of Algorithm 4). If Algorithm 4 returns true on input L, then L with −G_L satisfies the R-AST-Rule.
Because the R-PAST-Rule is a slight variation of the R-AST-Rule, Algorithm 4 can be slightly modified to yield a procedure for the R-PAST-Rule. An algorithm for the R-PAST-Rule is provided in [40].

Ruling out Proof Rules for Prob-solvable Loops
A question arising when combining our algorithmic approaches from Section 5.3 into a unifying framework is, given a Prob-solvable loop L, which algorithm to apply first to determine L's termination behavior. In [4], the authors provide an algorithm for computing a closed-form of E(M_i), where M is a polynomial over L's variables. The following lemma explains how the expression E(M_{i+1} − M_i) relates to the expression E(M_{i+1} − M_i | F_i). The lemma follows from the monotonicity of E.
Lemma 3 (Rule out Rules for L). Let (M_i)_{i∈N} be a stochastic process. If E(M_{i+1} − M_i | F_i) ≤ 0 holds almost surely for all i ∈ N, then E(M_{i+1} − M_i) ≤ 0 for all i ∈ N (and analogously with ≥ in place of ≤).
The contrapositive of Lemma 3 provides a criterion to rule out the viability of a given proof rule.
The expression E(M_{i+1} − M_i) can be computed as in [4]. Therefore, in some cases, proof rules can automatically be deemed nonviable without the need to compute bounding functions.
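The rule-out criterion can be sketched as follows, assuming a closed form of E(M_i) as an expression in i is available (as computed in [4]); the function name and the symbolic setup are illustrative:

```python
from sympy import Symbol, limit, oo, simplify

i = Symbol('i', positive=True)

def rsm_rule_ruled_out(expected_M):
    """expected_M: closed form of E(M_i) as a sympy expression in i.
    If E(M_{i+1}) - E(M_i) is eventually positive, then by Lemma 3
    M cannot eventually satisfy the supermartingale condition, so the
    RSM-Rule is not viable for M."""
    diff = simplify(expected_M.subs(i, i + 1) - expected_M)
    return bool(limit(diff, i, oo) > 0)

print(rsm_rule_ruled_out(i**2))   # True: expected differences grow positive
print(rsm_rule_ruled_out(-2 * i)) # False: expected differences equal -2
```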
6 Implementation and Evaluation

Implementation
We implemented and combined our algorithmic approaches from Section 5 in the new software tool AMBER, short for Asymptotic Martingale Bounds. AMBER and all benchmarks are available at https://github.com/probing-lab/amber. AMBER uses MORA [4,6] for computing the first-order moments of program variables and the DIOFANT package as its computer algebra system.

Bound Computation Improvements In addition to Algorithm 1 computing bounding functions for monomials of program variables, AMBER implements the following refinements:
1. A monomial x is deterministic, meaning it is independent of probabilistic choices, if x has a single branch and only depends on monomials having single branches. In this case, the exact value of x in any iteration is given by its first-order moments, and bounding functions can be obtained from these exact representations.
2. Bounding functions for an odd power p of a monomial x can be computed as u(i)^p and l(i)^p, where u(i) is an upper and l(i) a lower bounding function for x.
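Refinement 2 rests on the fact that t ↦ t^p is monotone on all of R for odd p, so bounding functions compose through the power (for even p this fails, e.g. with l(i) = −i). A minimal sketch with illustrative names, representing bounding functions as plain callables:

```python
def odd_power_bounds(l, u, p):
    """Given a lower bounding function l and an upper bounding function u
    for a monomial x, return bounding functions for x**p. Sound only for
    odd p, since t -> t**p is then monotone on all of R."""
    assert p % 2 == 1, "only sound for odd powers"
    return (lambda i: l(i) ** p), (lambda i: u(i) ** p)
```

For instance, with l(i) = −i and u(i) = i², the cube x³ is bounded below by −i³ and above by i⁶.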
Whenever the above enhancements are applicable, AMBER prefers them over Algorithm 1.

Experimental Setting and Results
Experimental Setting and Comparisons Regarding programs which are PAST, we compare AMBER against the tool ABSYNTH [42] and the tool in [10] which we refer to as MGEN. ABSYNTH uses a system of inference rules over the syntax of probabilistic programs to derive bounds on the expected resource consumption of a program and can, therefore, be used to certify PAST. In comparison to AMBER, ABSYNTH requires the degree of the bound to be provided upfront. Moreover, ABSYNTH cannot refute the existence of a bound and therefore cannot handle programs that are not PAST. MGEN uses linear programming to synthesize linear martingales and supermartingales for probabilistic transition systems with linear variable updates. To certify PAST, we extended MGEN [10] with the SMT solver Z3 [41] in order to find or refute the existence of conical combinations of the (super)martingales derived by MGEN which yield RSMs.
With AMBER-LIGHT we refer to a variant of AMBER without the relaxations of the proof rules introduced in Section 4. That is, with AMBER-LIGHT the conditions of the proof rules need to hold for all i ∈ N, whereas with AMBER the conditions are allowed to hold only eventually. For all benchmarks, we compare AMBER against AMBER-LIGHT to show the effectiveness of the respective relaxations. In each experimental table (Tables 1-3), ✓ symbolizes that the respective tool successfully certified PAST/AST/non-AST for the given program, and ✗ means it failed to do so. Further, NA indicates the respective tool failed to certify PAST/AST/non-AST because the given program is out of scope of the tool's capabilities. Every benchmark was run on a machine with a 2.2 GHz Intel i7 (Gen 6) processor and 16 GB of RAM and finished within a timeout of 50 seconds; most benchmarks terminated within a few seconds.
Benchmarks We evaluated AMBER on 38 probabilistic programs. We present our experimental results by separating our benchmarks into three categories: (i) 21 programs which are PAST (Table 1), (ii) 11 programs which are AST (Table 2) but not necessarily PAST, and (iii) 6 programs which are not AST (Table 3). The benchmarks have either been introduced in the literature on probabilistic programming [42,10,4,22,38], are adaptations of well-known stochastic processes, or have been designed specifically to test unique features of AMBER, like the ability to handle polynomial real arithmetic.
Experiments with PAST - Table 1: The 21 PAST benchmarks consist of 10 programs representing the original benchmarks of MGEN [10] and ABSYNTH [42], augmented with 11 additional probabilistic programs. Not all benchmarks of MGEN and ABSYNTH could be used for our comparison, as MGEN and ABSYNTH target related but different computation tasks than certifying PAST: MGEN aims to synthesize (super)martingales, but not ranking ones, whereas ABSYNTH focuses on computing bounds on the expected runtime. Therefore, we considered all 50 benchmarks of [10] (11 programs) and [42] (39 programs) and kept those for which the termination behavior is non-trivial. A benchmark is trivial regarding PAST if either (i) there is no loop, (ii) the loop is bounded by a constant, or (iii) the program is meant to run forever. Moreover, we cleansed the benchmarks of programs for which the witness for PAST is just a trivial combination of witnesses for already included programs. For instance, the benchmarks of [42] contain multiple programs that are concatenated constant biased random walks. These are relevant benchmarks when evaluating ABSYNTH for discovering bounds, but would blur the picture when comparing against AMBER for PAST certification. With these criteria, 10 out of the 50 original benchmarks of [10] and [42] remain. We add 11 additional benchmarks which have either been introduced in the literature on probabilistic programming [4,22,38], are adaptations of well-known stochastic processes, or have been designed specifically to test unique features of AMBER. Notably, out of the 50 original benchmarks from [42] and [10], only 2 remain which are included in our benchmarks and which AMBER cannot prove PAST (because they are not Prob-solvable). All our benchmarks are available at https://github.com/probing-lab/amber.

Experiments with AST - Table 2: We compare AMBER against AMBER-LIGHT on 11 benchmarks which are AST but not necessarily PAST and also cannot be split into PAST subprograms. Therefore, the SM-Rule is needed to certify AST.
To the best of our knowledge, AMBER is the first tool able to certify AST for such programs. Existing approaches like [1] and [14] can only witness AST for non-PAST programs if, intuitively speaking, the programs contain subprograms which are PAST. Therefore, we compared AMBER only against AMBER-LIGHT on this set of examples. The benchmark symmetric_2d_random_walk, which AMBER fails to certify as AST, models the symmetric random walk in R² and is still out of reach of current automation techniques. In [38], the authors mention that a closed-form expression M and functions p and d satisfying the conditions of the SM-Rule have not been discovered yet. The benchmark fair_in_limit_random_walk involves non-constant probabilities and can therefore not be modeled as a Prob-solvable loop.

Experiments with non-AST - Table 3: We compare AMBER against AMBER-LIGHT on 6 benchmarks which are not AST. To the best of our knowledge, AMBER is the first tool able to certify non-AST for such programs, and thus we compared AMBER only against AMBER-LIGHT. In [13], where the notion of repulsing supermartingales and the R-AST-Rule are introduced, the authors also propose automation techniques. However, the authors of [13] state that their "experimental results are basic", and their computational methods are evaluated on only 3 examples, without any available tool support. For the benchmarks in Table 3, the outcomes of AMBER and AMBER-LIGHT coincide. The reason for this is the R-AST-Rule's condition that the martingale expression must have c-bounded differences. This condition forces a suitable martingale expression to be bounded by a linear function, which is also the reason why AMBER cannot certify the benchmark polynomial_nast.
Experimental Summary Our results from Tables 1-3 demonstrate that:
- AMBER outperforms the state-of-the-art in automating PAST certification for Prob-solvable loops (Table 1).
- Complex probabilistic programs which are AST but not PAST, as well as programs which are not AST, can automatically be certified as such by AMBER (Tables 2, 3).
- The relaxations of the proof rules introduced in Section 4 are helpful in automating the termination analysis of probabilistic programs, as evidenced by the performance of AMBER against AMBER-LIGHT (Tables 1-3).

Related Work
Proof Rules for Probabilistic Termination Several proof rules have been proposed in the literature to provide sufficient conditions for the termination behavior of probabilistic programs. The work of [10] uses martingale theory to characterize positive almost sure termination (PAST). In particular, the notion of a ranking supermartingale (RSM) is introduced together with a proof rule (RSM-Rule) to certify PAST, as discussed in Section 3.1. The approach of [19] extended this method to include (demonic) non-determinism and continuous probability distributions, showing the completeness of the RSM-Rule for this program class. The compositional approach proposed in [19] was further strengthened in [29] to a sound approach using the notion of descent supermartingale map. In [1], the authors introduced lexicographic RSMs. The SM-Rule discussed in Section 3.2 was introduced in [38]. It is worth mentioning that this proof rule is also applicable to non-deterministic probabilistic programs. The work of [28] presented an independent proof rule based on supermartingales with lower bounds on conditional absolute differences. Both proof rules are based on supermartingales and can certify AST for programs that are not necessarily PAST. The approach of [43] examined martingale-based techniques for obtaining bounds on reachability probabilities, and thus termination probabilities, from an order-theoretic viewpoint. The notions of nonnegative repulsing supermartingales and γ-scaled submartingales, accompanied by sound and complete proof rules, have also been introduced. The R-AST-Rule from Section 3.3 was proposed in [13], mainly for obtaining bounds on the probability of stochastic invariants. An alternative approach is to exploit weakest precondition techniques for probabilistic programs, as presented in the seminal works [34,35], which can be used to certify AST.
The work of [37] extended this approach to programs with non-determinism and provided several proof rules for termination. These techniques are purely syntax-based. In [31] a weakest precondition calculus for obtaining bounds on expected termination times was proposed. This calculus comes with proof rules to reason about loops.

Automation of Martingale Techniques
The work of [10] proposed an automated procedure -by using Farkas' lemma -to synthesize linear (super)martingales for probabilistic programs with linear variable updates. This technique was considered in our experimental evaluation, cf. Section 6. The algorithmic construction of supermartingales was extended to treat (demonic) non-determinism in [12] and to polynomial supermartingales in [11] using semi-definite programming. The recent work of [14] uses ω-regular decomposition to certify AST. They exploit so-called localized ranking supermartingales, which can be synthesized efficiently but must be linear.
Other Approaches Abstract interpretation is used in [39] to prove the probabilistic termination of programs for which the probability of taking a loop k times decreases at least exponentially with k. In [18], a sound and complete procedure deciding AST is given for probabilistic programs with a finite number of reachable states from any initial state.
The work of [42] gave an algorithmic approach based on potential functions for computing bounds on the expected resource consumption of probabilistic programs. In [36], model checking is exploited to automatically verify whether a parameterized family of probabilistic concurrent systems is AST.
Finally, the class of Prob-solvable loops considered in this paper extends [4] to a wider class of loops. While [4] focused on computing statistical higher-order moments, our work addresses the termination behavior of probabilistic programs. The related approach of [22] computes exact expected runtimes of constant probability programs and provides a decision procedure for AST and PAST for such programs. Our programming model strictly generalizes the constant probability programs of [22], by supporting polynomial loop guards, updates and martingale expressions.

Conclusion
This paper reported on the automation of termination analysis of probabilistic while-programs whose guards and expressions are polynomial expressions. To this end, we introduced mild relaxations of existing proof rules for AST, PAST, and their negations, by requiring their sufficient conditions to hold only eventually. The key to our approach is that the structural constraints of Prob-solvable loops allow for automatically computing almost sure asymptotic bounds on polynomials over program variables. Prob-solvable loops cover a vast set of complex and relevant probabilistic processes, including random walks and dynamic Bayesian networks [5]. Only two out of 50 benchmarks in [10,42] are outside the scope of Prob-solvable loops regarding PAST certification. The almost sure asymptotic bounds were used to formalize algorithmic approaches for proving AST, PAST, and their negations. Moreover, for Prob-solvable loops, four different proof rules from the literature uniformly come together in our work.
Our approach is implemented in the software tool AMBER (github.com/probing-lab/amber), offering a fully automated approach to probabilistic termination. Our experimental results show that our relaxed proof rules enable proving probabilistic (non-)termination of more programs than could be treated before. A comparison to the state-of-the-art in automated analysis of probabilistic termination reveals that AMBER significantly outperforms related approaches. To the best of our knowledge, AMBER is the first tool to automate AST, PAST, non-AST and non-PAST in a single tool-chain.
There are several directions for future work. These include extensions of Prob-solvable loops with symbolic distributions, more complex control flow, and non-determinism. We will also consider program transformations that translate programs into our format. Extensions of the SM-Rule algorithm with non-constant probability and decrease functions are also of interest.