Tail Probabilities for Randomized Program Runtimes via Martingales for Higher Moments

Programs with randomization constructs are an active research topic, especially after the recent introduction of martingale-based analysis methods for their termination and runtimes. Unlike most existing works, which focus on proving almost-sure termination or estimating the expected runtime, in this work we study the tail probabilities of runtimes, such as "the execution takes more than 100 steps with probability at most 1%." Toward this goal, we devise a theory of supermartingales that overapproximate higher moments of the runtime. These higher moments, combined with a suitable concentration inequality, yield useful upper bounds of tail probabilities. Moreover, our vector-valued formulation enables automated template-based synthesis of those supermartingales. Our experiments suggest the method's practical use.


Introduction
The important roles of randomization in algorithms and software systems are nowadays well-recognized. In algorithms, randomization can bring remarkable speed gain at the expense of small probabilities of imprecision. In cryptography, many encryption algorithms are randomized in order to conceal the identity of plaintexts. In software systems, randomization is widely utilized for the purpose of fairness, security and privacy.
Embracing randomization in programming languages has therefore been an active research topic for a long time. Doing so not only offers a solid infrastructure that programmers and system designers can rely on, but also opens up the possibility of language-based, static analysis of properties of randomized algorithms and systems.
The current paper's goal is to analyze imperative programs with randomization constructs; the latter come in two forms, namely probabilistic branching and assignment from a designated, possibly continuous, distribution. We shall refer to such programs as randomized programs.
Runtime and Termination Analysis of Randomized Programs The runtime of a randomized program is often a problem of our interest; so is almost-sure termination, that is, whether the program terminates with probability 1. In the programming language community, these problems have been taken up by many researchers as a challenge of both practical importance and theoretical interest.
Most of the existing works on runtime and termination analysis follow either of the following two approaches.
- Martingale-based methods, initiated with the notion of ranking supermartingale in [5] and extended in [1,7,8,12,15], have their origin in the theory of stochastic processes. They can also be seen as a probabilistic extension of ranking functions, a standard proof method for termination of (non-randomized) programs. Martingale-based methods have seen remarkable success in automated synthesis using templates and constraint solving (such as LP or SDP).
- The predicate-transformer approach, pursued in [3,19,20], uses a more syntax-guided formalism of program logic and emphasizes reasoning by invariants.
The essential difference between the two approaches is not big: an invariant notion in the latter is easily seen to be an adaptation of a suitable notion of supermartingale. The work [33] presents a comprehensive account of the order-theoretic foundation behind these techniques. These existing works mostly focus on the following problems: deciding almost-sure termination, computing termination probabilities, and computing expected runtimes. (Here "computing" includes giving upper/lower bounds.) See [33] for a comparison of some of the existing martingale-based methods.
Our Problem: Tail Probabilities for Runtimes In this paper we focus on the problem of tail probabilities, which has not been studied much so far. We present a method for overapproximating tail probabilities; here is the problem we solve.
Input: a randomized program Γ, and a deadline d ∈ N
Output: an upper bound of the tail probability Pr(T_run ≥ d), where T_run is the runtime of Γ
Our target language is an imperative language that features randomization (probabilistic branching and random assignment). We also allow nondeterminism; this makes the program's runtime depend on the choice of a scheduler (i.e. how nondeterminism is resolved). In this paper we study the longest, worst-case runtime (therefore our scheduler is demonic). In the technical sections, we use the presentation of these programs as probabilistic control flow graphs (pCFGs), as is usual in the literature; see e.g. [1,33].
An example of our target program is in Fig. 1:
  1  x := 2; y := 2;
  2  while x > 0 and y > 0 do
  3    z := Unif(-2,1);
  4    if * then
  5      x := x + z
  6    else
  7      y := y + z
  8    fi
  9  od
It is an imperative program with randomization: in Line 3, the value of z is sampled from the uniform distribution over the interval [−2, 1]. The symbol * in Line 4 stands for a nondeterministic Boolean value; in our analysis, it is resolved so that the runtime becomes the longest. Given the program in Fig. 1 and a choice of a deadline (say d = 400), we can ask the question "what is the probability Pr(T_run ≥ d) for the runtime T_run of the program to exceed d = 400 steps?" As we show in §6, our method gives a guaranteed upper bound 0.0684. This means that, if we allow a time budget of d = 400 steps, the program terminates with probability at least 93%.
Our Method: Concentration Inequalities, Higher Moments, and Vector-Valued Supermartingales Towards the goal of computing tail probabilities, our approach is to use concentration inequalities, a technique from probability theory that is commonly used for overapproximating various tail probabilities. There are various concentration inequalities in the literature, and each of them is applicable in a different setting, such as a nonnegative random variable (Markov's inequality), known mean and variance (Chebyshev's inequality), a difference-bounded martingale (Azuma's inequality), and so on. Some of them were used for analyzing randomized programs [6] (see §7 for comparison).
In this paper, we use a specific concentration inequality that uses higher moments E[T_run], …, E[(T_run)^K] of the runtime T_run, up to a choice of the maximum degree K. The concentration inequality is taken from [4]; it generalizes Markov's and Chebyshev's inequalities. We observe that a higher moment yields a tighter bound of the tail probability as the deadline d grows bigger. Therefore it makes sense to strive for computing higher moments.
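To make this concrete, here is a minimal sketch (ours, with made-up moment bounds, not numbers from the paper) of the bound Pr(T_run ≥ d) ≤ min_k u_k / d^k, where u_k is an upper bound of E[(T_run)^k]. For small deadlines the first moment (plain Markov) can give the best bound; as d grows, higher moments take over.

```python
# Hedged illustration: u_1, u_2, u_3 below are hypothetical upper bounds on
# E[T_run], E[(T_run)^2], E[(T_run)^3]; they are not outputs of the paper's tool.

def tail_bound(moment_bounds, d):
    """min over k of u_k / d^k, where moment_bounds[k-1] = u_k."""
    return min(u / d ** k for k, u in enumerate(moment_bounds, start=1))

u = [10.0, 150.0, 3000.0]

# Small deadline: the first moment (Markov's inequality) is the tightest.
assert tail_bound(u, 12) == 10.0 / 12
# Large deadline: the third moment wins, far beating Markov's bound.
assert tail_bound(u, 1000) == 3000.0 / 1000 ** 3
assert tail_bound(u, 1000) < 10.0 / 1000
```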
For computing higher moments of runtimes, we systematically extend the existing theory of ranking supermartingales, from the expected runtime (i.e. the first moment) to higher moments. The theory features a vector-valued supermartingale, which not only generalizes easily to degrees up to arbitrary K ∈ N, but also allows automated synthesis much like usual supermartingales.
We also claim that the soundness of these vector-valued supermartingales is proved in a mathematically clean manner. Following our previous work [33], our arguments are based on the order-theoretic foundation of fixed points (namely the Knaster-Tarski, Cousot-Cousot and Kleene theorems), and we give upper bounds of higher moments by suitable least fixed points.
Overall, our workflow is as shown in Fig. 2. We note that the step 2 in Fig. 2 is computationally much cheaper than the step 1: in fact, the step 2 yields a symbolic expression for an upper bound in which d is a free variable. This makes it possible to draw graphs like the ones in Fig. 3. It is also easy to find a deadline d for which Pr(T run ≥ d) is below a given threshold p ∈ [0, 1].
We implemented a prototype that synthesizes vector-valued supermartingales using linear and polynomial templates. The resulting constraints are solved by LP and SDP solvers, respectively. Experiments show that our method can produce nontrivial upper bounds in reasonable computation time. We also experimentally confirm that higher moments are useful in producing tighter bounds.
Our Contributions Summarizing, the contributions of this paper are as follows.
- We extend the existing theory of ranking supermartingales from expected runtimes (i.e. the first moment) to higher moments. The extension has a solid foundation of order-theoretic fixed points. Moreover, its clean presentation by vector-valued supermartingales makes automated synthesis as easy as before. Our target randomized programs are rich, embracing nondeterminism and continuous distributions.
- We study how these vector-valued supermartingales (and the resulting upper bounds of higher moments) can be used to yield upper bounds of tail probabilities of runtimes. We identify a concentration lemma that suits this purpose. We show that higher moments indeed yield tighter bounds.
- Overall, we present a comprehensive language-based framework for overapproximating tail probabilities of runtimes of randomized programs (Fig. 2). It has been implemented, and our experiments suggest its practical use.
Organization We give preliminaries in §2. In §3, we review the order-theoretic characterization of ordinary ranking supermartingales and present an extension to higher moments of runtimes. In §4, we discuss how to obtain an upper bound of the tail probability of runtimes. In §5, we explain an automated synthesis algorithm for our ranking supermartingales. In §6, we give experimental results. In §7, we discuss related work. We conclude and give future work in §8. Some proofs and details are deferred to the appendices.

Preliminaries
We present some preliminary materials, including the definition of pCFGs (we use them as a model of randomized programs) and the definition of runtime. Given topological spaces X and Y , let B(X) be the set of Borel sets on X and B(X, Y ) be the set of Borel measurable functions X → Y . We assume that the set R of reals, a finite set L and the set [0, ∞] are equipped with the usual topology, the discrete topology, and the order topology, respectively. We use the induced Borel structures for these spaces. Given a measurable space X, let D(X) be the set of probability measures on X. For any µ ∈ D(X), let supp(µ) be the support of µ. We write E[X] for the expectation of a random variable X.
Our use of pCFGs follows recent works including [1]. A pCFG is specified by data including the following (see [1] for the full definition).
-A finite set of locations L = L D ∪ L P ∪ L N ∪ L A . It is the union of four mutually disjoint sets of deterministic, probabilistic, nondeterministic and assignment locations, respectively.
-A family Pr = (Pr l ) l∈LP of probability distributions, where Pr l ∈ D(L), for probabilistic locations. We require that l ′ ∈ supp(Pr l ) implies l → l ′ .
- A guard function G such that, for each l ∈ L_D and each valuation x, there exists a unique location l′ ∈ L satisfying l → l′ and x ∈ G(l, l′).
The update function Up can be decomposed into three functions Up_D, Up_P and Up_N; the elements of L_AD, L_AP and L_AN represent deterministic, probabilistic and nondeterministic assignments, respectively.
[Figure: the pCFG modeling the program in Fig. 1, with edges labeled x := 2, y := 2, the loop guard x > 0 and y > 0, z := Unif(−2, 1), the branches x := x + z and y := y + z, and the exit guard x ≤ 0 or y ≤ 0.]
An example of a pCFG is shown above; it models the program in Fig. 1. The node l_4 is a nondeterministic location. Unif(−2, 1) is the uniform distribution on the interval [−2, 1].
A configuration of a pCFG Γ is a pair (l, x) ∈ L × R V of a location and a valuation. We regard the set S = L × R V of configurations as a topological space by assuming that L is equipped with the discrete topology and S is equipped with the product topology. We say a configuration (l ′ , x ′ ) is a successor of (l, x), if l → l ′ and the following hold.
For an assignment location, the new valuation is x(x_j ← a), where x(x_j ← a) denotes the vector obtained by replacing the x_j-component of x by a. Here x_j is such that Up(l) = (x_j, u), and a is determined by u (deterministically, probabilistically or nondeterministically, according to the type of l).
An invariant of a pCFG Γ is a measurable set I ∈ B(S) such that (l_init, x_init) ∈ I and I is closed under taking successors (i.e. if c ∈ I and c′ is a successor of c, then c′ ∈ I). Use of invariants is a common technique in automated synthesis of supermartingales [1]: it restricts configuration spaces and thus makes the constraints on supermartingales weaker. A run of Γ is an infinite sequence c_0 c_1 … of configurations such that c_0 is the initial configuration (l_init, x_init) and c_{i+1} is a successor of c_i for each i. Let Run(Γ) be the set of runs of Γ.
A scheduler resolves nondeterminism: at a location in L_N ∪ L_AN, it chooses a distribution of next configurations depending on the history of configurations visited so far. Given a pCFG Γ and a scheduler σ of Γ, a probability measure ν^Γ_σ on Run(Γ) is defined in the usual manner. See Appendix B for details.
Definition 2.2 (reaching time T^Γ_C, T^Γ_{C,σ}). Let Γ be a pCFG and C ⊆ S be a set of configurations called a destination. The reaching time to C is the function T^Γ_C : Run(Γ) → [0, ∞] defined by T^Γ_C(c_0 c_1 …) = inf { i ∈ N | c_i ∈ C }. Fixing a scheduler σ makes T^Γ_C a random variable, since σ determines a probability measure ν^Γ_σ on Run(Γ). It is denoted by T^Γ_{C,σ}.
Runtimes of pCFGs are a special case of reaching times, namely to the set of terminating configurations.
The following higher moments are central to our framework. Recall that we are interested in demonic schedulers, i.e. those which make runtimes longer. Assume the setting of Def. 2.2, and let k ∈ N and c ∈ S. We write M^{Γ,k}_{C,σ}(c) for the k-th moment of the reaching time of Γ from c to C under the scheduler σ, that is, M^{Γ,k}_{C,σ}(c) = E[(T^{Γ_c}_{C,σ})^k], where Γ_c is the pCFG obtained from Γ by changing the initial configuration to c.
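For a concrete instance of these moments, consider the one-loop program "while coin() do skip od" that exits with probability 1/2 at each iteration: its runtime T is geometric on {1, 2, …}. The sketch below (ours, not the paper's implementation) approximates E[T^k] by a truncated sum and compares against the known closed forms.

```python
# k-th moment of a geometric runtime T (Pr(T = t) = p * (1-p)^(t-1)),
# approximated by truncating the defining series E[T^k] = sum_t t^k Pr(T = t).

def geometric_moment(k, p=0.5, cutoff=400):
    return sum(t ** k * p * (1 - p) ** (t - 1) for t in range(1, cutoff))

# Closed forms for p = 1/2: E[T] = 2, E[T^2] = 6, E[T^3] = 26.
assert abs(geometric_moment(1) - 2) < 1e-9
assert abs(geometric_moment(2) - 6) < 1e-9
assert abs(geometric_moment(3) - 26) < 1e-9
```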

Ranking Supermartingale for Higher Moments
We introduce one of the main contributions of the paper, a notion of ranking supermartingale that overapproximates higher moments. It is motivated by the following observation: martingale-based reasoning about the second moment must be conducted together with reasoning about the first moment. We conduct a systematic theoretical extension that features an order-theoretic foundation and vector-valued supermartingales. The theory accommodates nondeterminism and continuous distributions, too. We omit some details and proofs; they are in Appendix C. The fully general theory for higher moments is presented in §3.2; for readability, we first present its restriction to second moments in §3.1.
Prior to these, we review the existing theory of ranking supermartingales through the lens of order-theoretic fixed points. In doing so we follow [33]. The pre-expectation X maps a function η : S → [0, ∞] to a function Xη, defined by cases on the type of the location:
- If l ∈ L_D and x ∈ G(l, l′), then (Xη)(l, x) = η(l′, x).
- If l ∈ L_A, Up(l) = (x_j, u) and l → l′, then (Xη)(l, x) is the value, expectation or maximum of η(l′, x(x_j ← a)) over the possible values a of u, according to the type of the assignment.
Intuitively, Xη is the expectation of η after one transition; nondeterminism is resolved by the maximal choice. We define F_1 by F_1(η)(c) = 0 for c ∈ C and F_1(η)(c) = 1 + (Xη)(c) otherwise. The function space S → [0, ∞] carries a complete lattice structure, because [0, ∞] does; moreover F_1 is easily seen to be monotone. It is not hard to see either that the expected reaching time M^{Γ,1}_C to C coincides with the least fixed point µF_1. The following theorem is fundamental in theoretical computer science.
Theorem (Knaster-Tarski). Any monotone function F on a complete lattice has the least fixed point µF, and µF = min { η | F(η) ≤ η }, the least prefixed point.
Hence an arbitrary prefixed point of F_1, which coincides with the notion of ranking supermartingale [5], overapproximates the expected reaching time. This proves the soundness of ranking supermartingales.
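This fixed-point picture can be replayed on a toy one-loop pCFG (ours, not an example from the paper) that terminates with probability 1/2 at each step, the terminal state having value 0. Kleene iteration of F_1 from the bottom element converges to the least fixed point µF_1 = 2, the expected runtime, and any prefixed point η ≥ F_1(η), i.e. any ranking supermartingale, lies above it.

```python
# F1 restricted to the single non-terminal state of the toy pCFG:
# with probability 1/2 we stay in the loop, with probability 1/2 we reach
# the terminal state (value 0), and one unit of time elapses.

def F1(eta):
    return 1 + 0.5 * eta + 0.5 * 0

eta = 0.0                      # bottom of the lattice [0, oo]
for _ in range(200):           # Kleene iteration
    eta = F1(eta)
assert abs(eta - 2.0) < 1e-9   # mu F1 = expected reaching time = 2

candidate = 3.0                    # a (non-least) prefixed point
assert F1(candidate) <= candidate  # the supermartingale condition eta >= F1(eta)
assert candidate >= eta            # soundness: it overapproximates mu F1
```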

Ranking Supermartingales for the Second Moments
The key in the martingale-based reasoning of expected reaching times (i.e. first moments) was that they are characterized as the least fixed point of a function F 1 . Here it is crucial that for an arbitrary random variable  Then, an extension of F 1 for second moments can be defined as a combination of the time-elapse function El 1 and the pre-expectation X.
We can extend the complete lattice structure of [0, ∞] to the function space S → [0, ∞]^2 in a pointwise manner. It is routine to prove that F_2 is monotone with respect to this complete lattice structure; hence F_2 has a least fixed point µF_2, and µF_2 overapproximates the pair of the first and second moments of the reaching time (Thm. 3.5). To prove the theorem, we inductively prove a corresponding inequality for each σ and n, and take the supremum. See Appendix C for more details. Like ranking supermartingales for first moments, a ranking supermartingale for second moments is defined as a prefixed point of F_2, i.e. a function η such that η ≥ F_2(η). However, we modify the definition for the sake of implementation.
Definition 3.6 (ranking supermartingale for second moments). A ranking supermartingale for second moments is a function η : S → R^2 satisfying conditions corresponding to η ≥ F_2(η), slightly adapted for implementation (the precise conditions are in Appendix C).
Even though we only have an inequality in Thm. 3.5, we can prove the following desired property of our supermartingale notion.
The following example and theorem show that we cannot replace ≤ with = in Thm. 3.5 in general, but that equality holds in the absence of nondeterminism.
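On the same kind of toy one-loop pCFG (ours; exit probability 1/2 per step, terminal value (0, 0)), the least fixed point of F_2 can be computed by Kleene iteration. The time-elapse step (x_1, x_2) ↦ (x_1 + 1, x_2 + 2x_1 + 1) reflects E[(T + 1)^2] = E[T^2] + 2E[T] + 1, and the iteration converges to (2, 6), the true first and second moments of the geometric runtime.

```python
def El1(x1, x2):
    # time elapse: (E[T], E[T^2]) becomes (E[T+1], E[(T+1)^2])
    return (x1 + 1, x2 + 2 * x1 + 1)

def F2(eta):
    x1, x2 = eta
    pre = (0.5 * x1, 0.5 * x2)   # pre-expectation: terminal state contributes (0, 0)
    return El1(*pre)

eta = (0.0, 0.0)                  # bottom element; iterate F2 to its least fixed point
for _ in range(200):
    eta = F2(eta)
assert abs(eta[0] - 2.0) < 1e-9   # E[T]   = 2
assert abs(eta[1] - 6.0) < 1e-9   # E[T^2] = 6
```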

Ranking Supermartingales for the Higher Moments
We extend the results in §3.1 to moments higher than the second. Firstly, the time-elapse function El_1 is generalized as follows.
Definition 3.10 (time-elapse function El^{K,k}_1). For K ∈ N and k ∈ {1, …, K}, the function El^{K,k}_1 : [0, ∞]^K → [0, ∞] is defined by El^{K,k}_1(x_1, …, x_K) = Σ_{j=0}^{k} C(k, j) x_j, where x_0 = 1 and C(k, j) is the binomial coefficient. Again, a monotone function F_K is defined as a combination of the time-elapse functions El^{K,k}_1 and the pre-expectation X.
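The binomial identity E[(T + 1)^k] = Σ_{j=0}^{k} C(k, j) E[T^j] behind the generalized time-elapse function can be checked concretely; the sketch below (ours) applies it to the moment vector of a deterministic runtime T = t, for which the result must be exactly (t + 1)^k.

```python
from math import comb

def El(k, moments):
    """Generalized time elapse on one component: moments[j] = E[T^j], moments[0] = 1."""
    return sum(comb(k, j) * moments[j] for j in range(k + 1))

t, K = 5, 4
moments = [t ** j for j in range(K + 1)]   # moments of the constant runtime T = t
for k in range(1, K + 1):
    assert El(k, moments) == (t + 1) ** k  # one elapsed step shifts T to T + 1
```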
Definition 3.11 (F_K). Let I be an invariant and C ⊆ I be a Borel set. We define F_K analogously to F_2, combining the pre-expectation with El^{K,k}_1 in each component. As in Def. 3.6, we define a supermartingale as a prefixed point of F_K.
Definition 3.12 (ranking supermartingale for K-th moments). We define a ranking supermartingale for K-th moments analogously to Def. 3.6, as a prefixed point of F_K, adapted for implementation.
For higher moments, we can prove a result analogous to Thm. 3.7.
Theorem 3.13. If η is a supermartingale for K-th moments, then for each scheduler σ and each k ∈ {1, …, K}, the k-th component of η gives an upper bound of the k-th moment of the reaching time.

From Moments to Tail Probabilities via Concentration Inequalities
We discuss how to obtain upper bounds of tail probabilities of runtimes from upper bounds of higher moments of runtimes. Combined with the results in §3, this yields a martingale-based method for overapproximating tail probabilities. We use a concentration inequality; there are many choices in the literature (see e.g. [4]), and we use a variant of Markov's inequality. We prove that this concentration inequality is not only sound but also complete in a sense.
Formally, our goal is to calculate an upper bound of Pr(T^Γ_{C,σ} ≥ d) for a given deadline d > 0, under the assumption that we know upper bounds u_1, …, u_K of the moments of T^Γ_{C,σ} up to the K-th. In other words, we want to overapproximate sup_µ µ([d, ∞]), where µ ranges over the probability measures on [0, ∞] satisfying (∫ x dµ(x), …, ∫ x^K dµ(x)) ≤ (u_1, …, u_K). To answer this problem, we make use of the following generalized form of Markov's inequality.
Proposition 4.1. Let X be a real-valued random variable and φ be a nondecreasing and nonnegative function. For any d ∈ R with φ(d) > 0, Pr(X ≥ d) ≤ E[φ(X)] / φ(d).
By letting φ(x) = x^k in Prop. 4.1, we obtain the following inequality, which gives an upper bound of the tail probability that is "tight."
Proposition 4.2. Let µ be a probability measure on [0, ∞] whose k-th moment is bounded by u_k for each k ∈ {1, …, K}, and let d > 0. Then
    µ([d, ∞]) ≤ min_{1≤k≤K} u_k / d^k.    (1)
Moreover, this upper bound is tight: for any d > 0, there exists a probability measure such that (1) holds with equality.
Proof. The former part is immediate from Prop. 4.1. For the latter part, consider µ = pδ_d + (1 − p)δ_0, where δ_x is the Dirac measure at x and p is the value of the right-hand side of (1). ⊓⊔
By combining Thm. 3.13 with Prop. 4.2, we obtain the following corollary, which we can use for overapproximating tail probabilities.
Corollary 4.3. Let η : S → R^K be a ranking supermartingale for K-th moments. For each scheduler σ and each deadline d > 0, Pr(T^Γ_{C,σ} ≥ d) ≤ min_{0≤k≤K} η_k(l_init, x_init) / d^k. Here η_0, …, η_K are defined by η_0(c) = 1 and η(c) = (η_1(c), …, η_K(c)).
Hence higher moments become useful in overapproximating tail probabilities as d gets large. Later in §6, we demonstrate this fact experimentally.
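A quick numerical check (ours, with hypothetical moment bounds u_1, u_2, u_3) of the bound and of the tightness construction µ = pδ_d + (1 − p)δ_0 mentioned in the proof: the two-point measure respects every moment constraint and puts exactly mass p on [d, ∞].

```python
u = [4.0, 30.0, 260.0]   # hypothetical upper bounds on the first three moments
d = 20.0

# the bound of Prop. 4.2; it is also the tail mass of mu = p*delta_d + (1-p)*delta_0
p = min(u_k / d ** k for k, u_k in enumerate(u, start=1))
assert abs(p - 0.0325) < 1e-12       # here k = 3 attains the minimum

# the k-th moment of mu is p * d^k; each one satisfies its constraint
for k, u_k in enumerate(u, start=1):
    assert p * d ** k <= u_k + 1e-12
```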

Template-Based Synthesis Algorithm
We discuss an automated synthesis algorithm that calculates an upper bound for the K-th moment of the runtime of a pCFG using a supermartingale in the sense of Def. 3.6 or Def. 3.12. It takes a pCFG Γ, an invariant I, a set C ⊆ I of configurations, and a natural number K as input, and outputs an upper bound of the K-th moment.
Our algorithm is adapted from existing template-based algorithms for synthesizing a ranking supermartingale (for first moments) [5,7,8]. It fixes a linear or polynomial template with unknown coefficients for a supermartingale and, using numerical methods like linear programming (LP) or semidefinite programming (SDP), calculates a valuation of the unknown coefficients so that the axioms of ranking supermartingales for K-th moments are satisfied.
We hereby briefly explain the algorithms. See Appendix D for details.
Linear Template Our linear template-based algorithm is adapted from [5,8].
We assume that Γ, I and C are all "linear," in the sense that the expressions appearing in Γ are all linear and that I and C are represented by linear inequalities.
To deal with assignments from a distribution like x := Norm(0, 1), we also assume that the expected values of the distributions appearing in Γ are known. The algorithm first fixes a template for a supermartingale: for each location l, it fixes a K-tuple of linear expressions, the i-th of which is of the form Σ_j a^l_{j,i} x_j + b^l_i. Here each a^l_{j,i} and b^l_i is an unknown variable called a parameter. The algorithm next collects conditions on the parameters so that the tuples constitute a ranking supermartingale for K-th moments. This results in a conjunction of formulas of the form ϕ_1 ≥ 0 ∧ · · · ∧ ϕ_m ≥ 0 ⇒ ψ ≥ 0. Here ϕ_1, …, ϕ_m are linear formulas without parameters and ψ is a linear formula in which the parameters appear linearly in the coefficients. By Farkas' lemma (see e.g. [29, Cor. 7.1h]) we can turn such formulas into linear inequalities over the parameters by adding new variables; the feasibility of the result is efficiently solvable with an LP solver. We naturally wish to minimize the resulting upper bound of the K-th moment, i.e. the last component of η(l_init, x_init); we do so by setting it as the objective function of the LP problem.
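The constraints that the LP certifies can be sanity-checked by hand on a toy program (ours, not one of the paper's benchmarks): for the left-biased walk "while x > 0 do (x := x − 1 w.p. 3/4 | x := x + 1 w.p. 1/4) od", the linear template η(x) = a·x + b with the hand-picked valuation a = 2, b = 0 satisfies the first-moment ranking condition η(x) ≥ 1 + E[η(x′)] on the invariant x ≥ 1. An LP solver would find such a valuation automatically, minimizing η at the initial state.

```python
a, b = 2.0, 0.0              # a hand-picked valuation of the template parameters

def eta(x):
    return a * x + b

for x in range(1, 1000):     # sample points of the invariant x >= 1
    pre = 0.75 * eta(x - 1) + 0.25 * eta(x + 1)   # pre-expectation after one step
    assert eta(x) >= 1 + pre - 1e-9               # ranking condition (tight here)
```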
Polynomial Template The polynomial template-based algorithm is based on [7]. This time, Γ, I and C can be "polynomial." To deal with assignments from distributions, we assume that the n-th moments of the distributions in Γ are easily calculated for each n ∈ N. The algorithm is similar to the linear template-based one.
It first fixes a polynomial template for a supermartingale, i.e. it assigns to each location l a K-tuple of polynomial expressions with unknown coefficients. Like the linear template-based algorithm, it reduces the axioms of supermartingales for higher moments to a conjunction of formulas of the form ϕ_1 ≥ 0 ∧ · · · ∧ ϕ_m ≥ 0 ⇒ ψ ≥ 0. This time, each ϕ_i is a polynomial formula without parameters and ψ is a polynomial formula whose coefficients are linear formulas over the parameters. In the polynomial case, a conjunction of such formulas is reduced to an SDP problem using a Positivstellensatz theorem (we used a variant called Schmüdgen's Positivstellensatz [28]). We solve the resulting problem using an SDP solver, setting η(l_init, x_init) as the objective function.

Experiments
We implemented two programs in OCaml to synthesize a supermartingale based on a) a linear template and b) a polynomial template. The programs translate a given randomized program to a pCFG and output an LP or SDP problem as described in §5. An invariant I and a terminal configuration C for the input program are specified manually; see e.g. [21] for automatic synthesis of invariants. For linear templates, we used GLPK (v4.65) [13] as an LP solver. For polynomial templates, we used SOSTOOLS (v3.03) [31] (a sums-of-squares optimization tool that internally uses an SDP solver) on Matlab (R2018b), with SDPT3 (v4.0) [30] as the SDP solver. The experiments were carried out on a Surface Pro 4 with an Intel Core i5-6300U (2.40GHz) and 8GB RAM. We tested our implementation on the following two programs and their variants, which were also used in the literature [8,20]. Their code is in Appendix E.
Coupon collector's problem. A probabilistic model of collecting coupons enclosed in cereal boxes. There exist n types of coupons, and one repeatedly buys cereal boxes until all the types of coupons are collected. We consider two cases: (1-1) n = 2 and (1-2) n = 4. We tested the linear template program on them.
Random walk. We used three variants of 1-dimensional random walks: (2-1) an integer-valued one, (2-2) a real-valued one with assignments from continuous distributions, (2-3) one with adversarial nondeterminism; and two variants of 2-dimensional random walks, (2-4) and (2-5), with assignments from continuous distributions and adversarial nondeterminism. We tested both the linear and the polynomial template programs on these examples.
Experimental results We measured the execution times needed for Step 1 in Fig. 2. The results are in Table 1. Execution times are less than 0.2 seconds for the linear template program and several minutes for the polynomial template program. Upper bounds of tail probabilities obtained from Prop. 4.2 are in Fig. 3.
One would expect the polynomial template program to give better bounds than the linear one, because a polynomial template is more expressive. However, this did not hold for some test cases, probably because of numerical errors in the SDP solver. For example, (2-1) has a supermartingale for third moments that can be checked by hand calculation, but the SDP solver returned "infeasible" in the polynomial template program. It appears that our program fails when large numbers are involved (e.g. the third moments of (2-1) and (2-3)). We also tested a variant of (2-1) where the initial position is multiplied by 10000; the SDP solver returned "infeasible" in the polynomial template program, while the linear template program returned a nontrivial bound. Hence numerical errors seem likely to occur in the polynomial template program when large numbers are involved. Fig. 3 shows that the bigger the deadline d is, the more useful higher moments become (cf. the remark just after Cor. 4.3). To show the merit of our method compared with sampling-based methods, we calculated a tail probability bound for a variant of (2-2) (shown in Fig. 4) with a deadline d = 10^11. Because of its very long expected runtime, a sampling-based method would not work for it. In contrast, the linear template-based program gave an upper bound Pr(T^Γ_{C,σ} ≥ 10^11) ≤ 5000000025/10^11 ≈ 0.05 in almost the same execution time as for (2-2) (< 0.02 seconds).

Related Work
Martingale-Based Analysis of Randomized Programs Martingale-based methods are widely studied for the termination analysis of randomized programs. One of the first notions is that of ranking supermartingales, introduced in [5] for proving almost-sure termination. The theory of ranking supermartingales has since been extended actively: accommodating nondeterminism [1,7,8,12], syntax-oriented composition of supermartingales [12], proving properties beyond termination/reachability [15], and so on. Automated template-based synthesis of supermartingales by constraint solving has been pursued, too [1,5,7,8].
In the literature on martingale-based methods, the one closest to this work is [6]. Among its contributions is the analysis of tail probabilities. It is done by either of the following combinations: 1) difference-bounded ranking supermartingales and the corresponding Azuma's martingale concentration inequality; and 2) (not necessarily difference-bounded) ranking supermartingales and Markov's concentration inequality. When we compare these two methods with ours, the first requires repeated martingale synthesis for different parameter values, which can pose a performance challenge. The second corresponds to the restriction of our method to the first moment; recall that we showed the advantage of using higher moments, theoretically (§4) and experimentally (§6). See Appendix F.1 for detailed discussions. Implementation is lacking in [6], too.
[Table 1: Upper bounds of the moments of runtimes. "-" indicates that the LP or SDP solver returned "infeasible". The "degree" column shows the degree of the polynomial template used in the experiments.]
[Fig. 4: a variant of (2-2); its loop body either updates x := x - z or samples z := Unif(0,1) and updates x := x + z, and ends with refute(x < 0).]
[Fig. 3: Upper bounds of tail probabilities. Gray lines are the bounds u_k/d^k obtained from the moment bounds u_k in Table 1, where d is the deadline; each black line is the minimum of the gray lines, i.e. the upper bound by Prop. 4.2. The red lines in (1-1), (1-2) and (2-1) show the true tail probabilities calculated analytically. The red points show tail probabilities calculated by Monte Carlo sampling with 100000000 trials. We did not calculate nor approximate the true tail probabilities for (2-4) and (2-5), as these examples seem difficult due to nondeterminism.]
The work [1] is also close to ours in that their supermartingales are vector-valued. The difference is in the orders: [1] uses the lexicographic order between vectors, aiming to prove almost-sure termination, whereas we use the pointwise order between vectors, for overapproximating higher moments.
The Predicate-Transformer Approach to Runtime Analysis In the runtime/termination analysis of randomized programs, another principal line of work uses predicate transformers [3,19,20], following preceding works on probabilistic predicate transformers such as [22,25]. In fact, from the mathematical point of view, the main construct for witnessing runtime/termination in those predicate-transformer calculi (called invariants; see e.g. [20]) is essentially the same thing as a ranking supermartingale. Therefore the difference between the martingale-based and predicate-transformer approaches is mostly a matter of presentation: the predicate-transformer approach is more closely tied to program syntax and has a stronger deductive flavor. It also seems that there is less work on automated synthesis in the predicate-transformer approach.
In the predicate-transformer approach, the work [19] is the closest to ours, in that it studies the variance of runtimes of randomized programs. The main differences are as follows: 1) computing tail probabilities is not pursued in [19]; 2) their extension from expected runtimes to variance involves an additional variable τ, which poses a challenge to automated synthesis as well as to generalization to even higher moments; and 3) they do not pursue automated analysis. See Appendix F.2 for further details.
Higher Moments of Runtimes Computing and using higher moments of runtimes of probabilistic systems (a class generalizing randomized programs) has been pursued before. In [10], computing moments of runtimes of finite-state Markov chains is reduced to a certain linear equation. In the study of randomized algorithms, the survey [11] collects a number of methods, among which are some tail probability bounds using higher moments. Unlike ours, none of these methods are language-based, static ones, and they do not allow automated analysis.
Other Potential Approaches to Tail Probabilities We discuss potential approaches to estimating tail probabilities, other than the martingale-based one.
Sampling is widely employed for approximating the behavior of probabilistic systems, especially in the field of probabilistic programming languages, since exact symbolic reasoning is hard in the presence of conditioning; see e.g. [36]. We also used sampling to estimate tail probabilities in (2-2), Fig. 3. The main advantages of our current approach over sampling are threefold: 1) our upper bounds come with a mathematical guarantee, while sampling bounds can always be erroneous; 2) it requires ingenuity to sample programs with nondeterminism; and 3) programs whose execution can take millions of years can still be analyzed by our method in a reasonable time, without executing them. The last advantage is shared by static, language-based analysis methods in general; see e.g. [3].
Another potential method is probabilistic model checkers such as PRISM [23]. Their algorithms are usually only applicable to finite-state models, and thus not to randomized programs in general. Nevertheless, fixing a deadline d can make the reachable part S ≤d of the configuration space S finite, opening up the possibility of use of model checkers. It is an open question how to do so precisely, and the following challenges are foreseen: 1) if the program contains continuous distributions, the reachable part S ≤d becomes infinite; 2) even if S ≤d is finite, one has to repeat (supposedly expensive) runs of a model checker for each choice of d. In contrast, in our method, an upper bound for the tail probability Pr(T run ≥ d) is symbolically expressed as a function of d (Prop. 4.2). Therefore, estimating tail probabilities for varying d is computationally cheap.
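The contrast can be sketched on a one-loop chain that exits with probability 1/2 per step (a toy of ours, not one of the paper's benchmarks): the explicit-state computation must be redone for every deadline d, while the moment bound min_k u_k/d^k is a closed-form function of d that is merely evaluated. Here we use the chain's true moments E[T] = 2, E[T^2] = 6, E[T^3] = 26 as the u_k.

```python
def exact_tail(d):
    # model-checker style: unroll the chain d - 1 steps and track the
    # probability of still being in the loop, i.e. Pr(T >= d)
    alive = 1.0
    for _ in range(d - 1):
        alive *= 0.5
    return alive

def moment_bound(d, u=(2.0, 6.0, 26.0)):
    # symbolic bound min_k u_k / d^k, evaluated at any d at no extra cost
    return min(u_k / d ** k for k, u_k in enumerate(u, start=1))

for d in (5, 10, 20):
    assert exact_tail(d) <= moment_bound(d)   # the bound is sound (loose here)
```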

Conclusions and Future Work
We provided a technique to obtain an upper bound of the tail probability of runtimes, given a randomized program and a deadline. We first extended the ordinary ranking supermartingale notion, using its order-theoretic characterization, so that it can calculate upper bounds of higher moments of runtimes for randomized programs. Then, using a suitable concentration inequality, we introduced a method to calculate an upper bound of tail probabilities from upper bounds of higher moments. The resulting method is not only sound but also complete in a certain sense. We also implemented an automated synthesis algorithm and demonstrated the applicability of our framework.

Future Work Example 3.8 shows that our supermartingale is not complete: it sometimes fails to give a tight bound for higher moments. Studying and improving this incompleteness is one possible direction of future work. For example, the following questions would be interesting: Can the bound given by our supermartingale be arbitrarily bad? Can we recover completeness by restricting the type of nondeterminism? Can we define a supermartingale that is complete?
We are also interested in improving the implementation. The polynomial template program failed to give upper bounds for higher moments because of numerical errors (see §6), and we wish to remedy this situation. There exist several studies on using numerical solvers for verification without being affected by numerical errors [16-18, 26, 27]; we might make use of these works for improvements.

Appendix A Preliminaries on Measure Theory
In this section, we review some results from measure theory that are needed in the rest of the paper. For more details, see e.g. [2,34].
Definition A.1 (pushforward measure). Let φ : X → Y be a measurable function and µ be a probability measure on X. The pushforward measure φ_*µ is the probability measure on Y defined by (φ_*µ)(A) = µ(φ^{-1}(A)) for each measurable set A ⊆ Y.

Lemma A.2 (change of variables). Let φ : X → Y and f : Y → [0, ∞] be measurable functions and µ be a probability measure on X. Then ∫_Y f d(φ_*µ) = ∫_X (f ∘ φ) dµ, where f ∘ φ denotes the composite function of f and φ. ⊓ ⊔

Lemma A.3. Let (X, B_X) and (Y, B_Y) be measurable spaces and µ_x be a probability measure on Y for each x ∈ X. The following conditions are equivalent.

1. For each E ∈ B_Y, the function x ↦ µ_x(E) is measurable.
2. For each measurable function f : Y → [0, ∞], the function x ↦ ∫_Y f dµ_x is measurable.

Proof. The implication 2 ⇒ 1 follows by taking f = 1_E; for 1 ⇒ 2, approximate f by simple functions. The rest of the proof is easy. ⊓ ⊔
For any f : A → B and any set X, X^f : X^B → X^A denotes the function defined by X^f(g) = g ∘ f; in particular, X^{n⊆ω} : X^ω → X^n restricts an infinite sequence to its first n components. In §B, we use the following corollary of the Kolmogorov extension theorem (see [34, §2.4]).
Corollary A.4. Let (X, B_X) be a measurable space and µ_n be an inner regular probability measure on X^n for each n < ω. Assume (X^{n⊆n+1})_* µ_{n+1} = µ_n for each n. Then there exists a unique probability measure µ_ω on X^ω such that (X^{n⊆ω})_* µ_ω = µ_n for each n < ω.

B K-th moments of runtimes and rewards
We define a probability measure on the set of runs of a pCFG given a scheduler.
We then define the k-th moment of runtimes. Here we slightly generalize the runtime model by considering a reward function, and redefine some of the notions to accommodate it. This generalization is not essential, however, so the reader can safely assume that we are just counting the number of steps until termination (by taking the constant function 1 as the reward function). Let Γ = (L, V, l_init, x_init, →, Up, Pr, G) be a pCFG. A reward function on Γ is a measurable function Rew : S → [0, ∞]. Recall that we regard the set S = L × R^V of configurations as the product measurable space of (L, 2^L) and (R^V, B(R^V)). A scheduler of Γ resolves two types of nondeterminism: nondeterministic transitions and nondeterministic assignments.
Note that if L_N = ∅ and L_AN = ∅, then there exists only one scheduler, which is trivial.
In the rest of the paper, the concatenation of two finite sequences ρ_1, ρ_2 is denoted by ρ_1ρ_2 or by ρ_1 · ρ_2. Given a scheduler σ and a history of configurations ρ ∈ S^+, let µ^σ_ρ be the probability distribution of the next configuration determined by σ.
Definition B.2. Let σ be a scheduler and ρ ∈ S^+. A probability measure µ^σ_ρ on S is defined by case distinction on the last configuration (l, x) of ρ.
In each case, it easily follows that g_{n,l} is measurable. Note that δ_{(−)}(E) = 1_E and (x, y) ↦ x(x_j ← y) are measurable functions. We use Lemma A.3 for the last two cases.

⊓ ⊔
Given an initial configuration c_0, let ν^σ_{c_0,n} be a probability measure on the set {c_0 ρ | ρ ∈ S^n} ≅ S^n of paths.
By Corollary A.4, we can define a probability measure on S^ω. Note that (S, B(S)) is a Polish space (a separable completely metrizable topological space), and hence a Radon space. Therefore, ν^σ_{c_0,n} is inner regular.

Definition B.7. Let ν^σ_{c_0} be the unique probability measure such that (S^{n⊆ω})_* ν^σ_{c_0} = ν^σ_{c_0,n} for each n < ω.

Definition B.8 (accumulated reward Rew^{c_0}_C). Given a reward function Rew : S → [0, ∞], let Rew^{c_0}_C : S^ω → [0, ∞] be the measurable function defined as the sum of the rewards from the initial configuration c_0 to the last configuration before entering C. That is, Rew^{c_0}_C(c_0 c_1 c_2 …) = Σ_{j=0}^{N−1} Rew(c_j) if there exists N such that c_N ∈ C and c_j ∉ C for each j < N; and Rew^{c_0}_C(c_0 c_1 c_2 …) = Σ_{j<ω} Rew(c_j) otherwise (i.e. if c_i ∉ C for each i).
Note that Rew(c 0 ) is included in the sum.
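On finite run prefixes, the accumulated reward of Definition B.8 can be sketched as follows; the encoding of configurations and the helper names are our own illustrative assumptions.

```python
def accumulated_reward(run, rew, in_C):
    """Accumulated reward Rew^{c0}_C of a run (a sequence of configurations):
    sum rew(c_j) from the initial configuration up to, but not including,
    the first configuration in C.  Note that rew(c_0) IS included, while
    the reward of the first configuration inside C is not."""
    total = 0.0
    for c in run:
        if in_C(c):      # first configuration in C: stop, its reward excluded
            return total
        total += rew(c)
    return total         # the run never enters C: sum over all configurations

# Counting steps: constant reward 1, terminal set C = {0}.
run = [3, 2, 1, 0]
assert accumulated_reward(run, rew=lambda c: 1, in_C=lambda c: c == 0) == 3
```

With the constant reward 1, the accumulated reward is exactly the number of steps until the run enters C, matching the remark at the start of §B.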
The function above is measurable by Lemma A.3. The correspondence between the notations in §2 and in §B is as follows.
For example, the runtime T_run corresponds to the accumulated reward Rew^{c_0}_C where Rew(c) = 1 for each c ∈ S.

C Omitted Details and Proofs in §3
The ultimate goal of this section is to prove Theorem 3.13. In §C.1-C.2, we prove some lemmas regarding X (Definition 3.1) and El K,k 1 (Definition 3.10). In §C.3, we prove theorems analogous to Theorem 3.7, Theorem 3.9 and Theorem 3.13. In §C.4, we prove Theorem 3.13. We prove them in a generalized way so that an arbitrary reward function is allowed, as in §B.

C.1 Basic properties of the pre-expectation
We prove several lemmas for X in Definition 3.1.
The next lemma claims that we can ignore the outside of an invariant I.

- Let {η_n}_{n<ω} and {η′_n}_{n<ω} be ω-chains. Then we have sup_{n∈ω} η_n + sup_{n∈ω} η′_n = sup_{n∈ω}(η_n + η′_n). That is, the addition + is ω-continuous.
- Let {η_n}_{n<ω} be an ω-chain and a ≥ 0. Then we have a · sup_{n∈ω} η_n = sup_{n∈ω}(a · η_n).
These properties are often used in the proofs of ω-continuity in the rest of the paper.
- Assume l ∈ L_D and x ⊨ G(l, l′).
by the monotone convergence theorem. • Assume u ∈ B(R).
The next lemma is a justification of the name "pre-expectation".

C.3 Characterizing higher moments by F K
We prove Theorem 3.7, Theorem 3.9 and Theorem 3.13 in the generalized runtime model. We first extend Definition 3.11 so that an arbitrary reward is allowed.
Definition C.8. Let I be an invariant and C ⊆ I be a Borel set.

Proof. Immediate from Lemma C.2 and Lemma C.6. ⊓ ⊔
To prove Theorem C.10 and Theorem C.11, we consider an approximation of k-th moments of accumulated rewards up to finite steps.
Definition C.12 (accumulated reward up to n steps). Let Rew^{c_0}_{C,n} : S^n → [0, ∞] be a measurable function whose definition is similar to that of Rew^{c_0}_C except that the sum of the rewards is restricted to the first n configurations. The next lemma shows a connection between Rew^{c_0}_C and Rew^{c_0}_{C,n}.
-Assume there exists N ∈ ω such that c N ∈ C and 0 ≤ j < N ⇒ c j / ∈ C.

⊓ ⊔
The k-th moment of Rew^{c_0}_{C,n} is denoted by M^{Rew,k}_{C,σ,n}(c_0).
Definition C.14 (k-th moment up to n steps). A function M^{Rew,k}_{C,σ,n} : S → [0, ∞] is defined as follows.
Proof. The former part is immediate by Lemma C.13. The latter part is proved by the following calculation.
The following lemma easily follows from the definition of µ σ ρ .
The following lemma expresses the (n + 1)-step approximation M^{Rew,k}_{C,σ,n+1} in terms of the n-step approximations M^{Rew,1}_{C,σ^{c_0},n}, …, M^{Rew,K}_{C,σ^{c_0},n}; it plays a crucial role in the induction step of the proofs of Theorem C.10 and Theorem C.11.

Proof.
for each σ and n by induction on n.
-If n = 0, the l.h.s. and the r.h.s. are equal to 0.
- If n > 0, it suffices to prove that for each c_0, there exists σ′ such that
2. We take the supremum of (6) with respect to n, and then with respect to σ.
Proof (Theorem C.11). Here, we prove the generalized statement.

The following definition and theorem generalize Definition 3.12 and Theorem 3.13, respectively.
Theorem C.20. If η is a supermartingale for K-th moments, then for any c ∈ I and any scheduler σ, (M^{Rew,1}_{C,σ}(c), …, M^{Rew,K}_{C,σ}(c)) ≤ η(c).

Proof. By Lemma C.1, F_K(η′) ≤ η′ is easily proved. By the Knaster-Tarski theorem, we have µF_K ≤ η′. Therefore the claimed inequality holds for each c ∈ I. ⊓ ⊔

D Details of Template-Based Synthesis Algorithm
In this section we describe the template-based synthesis algorithms in §5 in more detail. They are based on existing template-based algorithms for synthesizing ranking supermartingales for first moments [5,7,8]. The input to the algorithm is a pCFG Γ, an invariant I, a set C ⊆ I of configurations, and a natural number K. The output is an upper bound of the K-th moment.

D.1 Linear Template-based Algorithm
Synthesis of a ranking supermartingale via reduction to an LP problem is discussed in [5,8]. We adapt this for our supermartingales. We first define some notions.
Definition D.1. Let V = {x_1, …, x_n} be a set of variables. A linear expression over V is a formula of the form a_1 x_{i_1} + ⋯ + a_n x_{i_n} + b where a_1, …, a_n, b ∈ R and x_{i_1}, …, x_{i_n} ∈ V. We write R_lin[V] for the set of linear expressions. A linear inequality over V is a formula of the form ϕ ≥ 0 where ϕ is a linear expression. A linear conjunctive predicate is a conjunction ϕ_1 ≥ 0 ∧ ⋯ ∧ ϕ_p ≥ 0 of linear inequalities, and a linear predicate is a disjunction (ϕ_{1,1} ≥ 0 ∧ ⋯ ∧ ϕ_{1,p_1} ≥ 0) ∨ ⋯ ∨ (ϕ_{q,1} ≥ 0 ∧ ⋯ ∧ ϕ_{q,p_q} ≥ 0) of linear conjunctive predicates. We define their semantics in the standard manner. For a pCFG Γ = (L, V, l_init, x_init, →, Up, Pr, G), a linear expression map (resp. linear predicate map) over Γ is a function that assigns a linear expression (resp. linear predicate) to each location of Γ. The semantics of the former is a function assigning a real number to each configuration, i.e. it has type L × R^V → R, and that of the latter is a set of configurations, i.e. a subset of L × R^V. They are defined in the natural manner.
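A minimal concrete encoding of linear expressions and linear conjunctive predicates might look as follows; the class and function names are ours, for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class LinExpr:
    """A linear expression a_1*x_1 + ... + a_n*x_n + b over variables V."""
    coeffs: dict = field(default_factory=dict)  # variable name -> coefficient
    const: float = 0.0

    def eval(self, valuation):
        """Evaluate under a valuation (variable name -> real number)."""
        return sum(a * valuation[x] for x, a in self.coeffs.items()) + self.const

def holds_conjunctive(exprs, valuation):
    """A linear conjunctive predicate phi_1 >= 0 /\\ ... /\\ phi_p >= 0."""
    return all(e.eval(valuation) >= 0 for e in exprs)

# x - 1 >= 0  /\  10 - x >= 0   (i.e. 1 <= x <= 10)
pred = [LinExpr({"x": 1.0}, -1.0), LinExpr({"x": -1.0}, 10.0)]
assert holds_conjunctive(pred, {"x": 5.0})
assert not holds_conjunctive(pred, {"x": 11.0})
```

A linear expression map would then simply be a dictionary from locations to such `LinExpr` objects.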
In the rest of this section, we describe a linear template-based synthesis algorithm for a pCFG Γ, an invariant I, a set C ⊆ I of configurations, and a natural number K. We assume that the input satisfies the following conditions; similar conditions were assumed in [5,8].
- For any l ∈ L_A such that Up(l) = (x_j, u),
  • if u ∈ B(R^V, R), then u is represented by a linear expression over V;
  • if u ∈ D(R), then the expectation of u is known; and
  • if u ∈ B(R), then u is represented by a linear predicate φ over {x_j}.
- For any l ∈ L_D and l′ ∈ L, the guard G(l, l′) is represented by a linear predicate over V.
- The invariant I is represented by a linear predicate map over Γ.
- The set C of terminal configurations is represented by a linear conjunctive predicate map.
To obtain the smallest upper bound, we add η_K(l_init, x_init), a linear expression over U, to the objective function of the linear programming problem and ask the LP solver to minimize it.
A natural question concerns the converse of the implication (10) ⇒ (9). The following theorem answers this question to some extent.

Theorem D.3 (affine form of Farkas' lemma (see e.g. [29, Cor. 7.1h])). Let A ∈ R^{n×m}, b ∈ R^n, c ∈ R^m and d ∈ R. If {x | Ax ≤ b} is not empty, the following two conditions are equivalent.
We note that if a pCFG Γ has no program variables (V = ∅) and all the transitions are probabilistic (that is, if Γ is a Markov chain), the above method gives the exact values of the moments. This is because the LP problem has the following obvious optimal solution: η_k(l) = (the k-th moment of runtimes from location l).
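As a sanity check of this remark, for a tiny Markov chain the moments can be obtained by directly solving the linear equations (in the spirit of [10]); the chain below (terminate with probability p, otherwise stay) is our own example.

```python
import numpy as np

# A two-location Markov chain: from l0, terminate with prob. p, else stay.
# The moments m_k = E[T^k] from l0 satisfy linear equations obtained by
# expanding T = 1 + T' (one step plus the remaining runtime):
#   m1 = 1 + (1-p)*m1
#   m2 = E[(1+T')^2] = 1 + 2*(1-p)*m1 + (1-p)*m2
p = 0.25
A = np.array([[1 - (1 - p), 0.0],
              [-2 * (1 - p), 1 - (1 - p)]])
b = np.array([1.0, 1.0])
m1, m2 = np.linalg.solve(A, b)

# Closed forms for a geometric runtime with success probability p:
assert abs(m1 - 1 / p) < 1e-9           # E[T]   = 1/p       = 4
assert abs(m2 - (2 - p) / p**2) < 1e-9  # E[T^2] = (2-p)/p^2 = 28
```

For such chains the LP has no inequality slack, so the solver returns exactly these values.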

D.2 Polynomial Template-based Algorithm
We consider fixing a polynomial template for a supermartingale. The algorithm in this section is based on [7].
Definition D.4. Let V = {x_1, …, x_n} be a set of variables. A monomial is a formula of the form x_{i_1}^{d_1} ⋯ x_{i_p}^{d_p}. We call d_1 + ⋯ + d_p the degree of the monomial, and write M_{≤d} for the set of monomials whose degrees are no greater than d. A polynomial expression (or simply a polynomial) over V is a formula of the form a_1 m_1 + ⋯ + a_q m_q + b where a_1, …, a_q, b ∈ R and m_1, …, m_q are monomials. We write R[V] for the set of polynomial expressions over V. The notions of polynomial inequality, polynomial conjunctive predicate, polynomial predicate, polynomial expression map and polynomial predicate map are defined in a manner similar to Def. D.1.
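For instance, the set M_{≤d} can be enumerated as follows, with a monomial represented as a multiset (tuple) of variables; this is an illustrative sketch, not part of the implementation in §6.

```python
from itertools import combinations_with_replacement

def monomials_up_to_degree(variables, d):
    """Enumerate M_{<=d}: all monomials over `variables` of degree <= d,
    represented as tuples of variables (the empty tuple is the constant 1)."""
    return [m for deg in range(d + 1)
              for m in combinations_with_replacement(variables, deg)]

ms = monomials_up_to_degree(["x", "y"], 2)
# 1, x, y, x^2, xy, y^2
assert len(ms) == 6
```

In general |M_{≤d}| = C(n + d, d) for n variables, which is what determines the size of a degree-d polynomial template.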
In the polynomial case, we assume that a pCFG Γ , an invariant I, a set C ⊆ I of configurations and a natural number K satisfy the following conditions. Assumption D.5.
- For any l ∈ L_A with Up(l) = (x_j, u),
  • if u ∈ B(R^V, R), then u is represented by a polynomial expression over V;
  • if u ∈ D(R), then the K-th moment of u is known; and
  • if u ∈ B(R), then u is represented by a polynomial predicate φ over {x_j}.
- For any l ∈ L_D and l′ ∈ L, the guard G(l, l′) is represented by a polynomial predicate over V.
- The invariant I is represented by a polynomial predicate map over Γ.
- The set C of terminal configurations is represented by a polynomial conjunctive predicate map.
The polynomial template-based synthesis algorithm is similar to the linear template-based one. In the polynomial case, the user has to fix the maximum degree d of the polynomial template. The algorithm first fixes a degree-d polynomial template for a supermartingale. It is a family of formulas indexed by i ∈ {1, …, K} and l ∈ L that have the following form. Each a^l_{m,i} and b^l_i is a newly added variable called a parameter, and we write U for the set of all parameters. Our goal is to find a valuation U → R so that the K-tuple η_1(l, x), …, η_K(l, x) of polynomial expression maps is a ranking supermartingale for K-th moments (Definition 3.12).
Similarly to the linear case, the algorithm collects conditions on the parameters. It results in a conjunction of formulas of the following form. Here each ▷_i ∈ {≥, >}, each ϕ_i is a polynomial expression without parameters, and ψ is a polynomial formula over V whose coefficients are linear expressions over U. Relaxing the strict inequalities, we obtain the following. To reduce (12) to a form that is solvable by a numerical method, we can use the notion of sum-of-square polynomials [7]. A polynomial expression f is said to be sum-of-square (SOS) if there exist polynomial expressions g_1, …, g_l such that f = g_1^2 + ⋯ + g_l^2. Obviously, a sum-of-square polynomial is nonnegative. Therefore the following formula is a sufficient condition for (12):

∃ (h_w)_{w∈{0,1}^m} : sum-of-square. ψ = Σ_{w∈{0,1}^m} h_w · ϕ_1^{w_1} ⋯ ϕ_m^{w_m}.   (13)
Here w_i denotes the i-th component of w ∈ {0, 1}^m. One of the reasons that sum-of-square polynomials are convenient is that they are characterized using the notion of positive semidefinite matrices.
Proposition D.6 (see e.g. [14]). A polynomial expression f over V is sum-of-square if and only if there exist a vector y whose components are monomials over V and a positive semidefinite matrix A such that f = y^T A y.
⊓ ⊔

By the proposition above, the existence of a valuation U → R of parameters and of sum-of-square polynomials as in (13) can be reduced to a semidefinite programming (SDP) problem. As in the linear case, by setting the linear expression η_K(l_init, x_init) as the objective function, we can minimize it.
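A toy instance of Prop. D.6 can be checked numerically as follows; the example polynomial is ours.

```python
import numpy as np

# f(x) = x^2 + 2x + 1 is sum-of-square because f = y^T A y with
# y = (1, x)^T and the positive semidefinite matrix A below
# (indeed f = (x + 1)^2).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# A is positive semidefinite: all eigenvalues are >= 0 (here 0 and 2).
assert np.all(np.linalg.eigvalsh(A) >= -1e-9)

# y^T A y agrees with f at sample points.
f = lambda x: x**2 + 2*x + 1
for x in [-2.0, 0.0, 3.5]:
    y = np.array([1.0, x])
    assert abs(y @ A @ y - f(x)) < 1e-9
```

An SDP solver searches for exactly such a matrix A (the "Gram matrix") subject to the linear constraints that y^T A y matches the target polynomial coefficient-wise.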
In the linear case, completeness was partially ensured by Farkas' lemma. In the polynomial case, the role is played by the following theorem.

E Test Programs
We have augmented the standard syntax of randomized programs (see e.g. [9]) so that we can specify an invariant and a terminal configuration. To specify an invariant, we can use either of the following forms.
{ true }

the best one. This "try many and pick the best" workflow is much like that in [9]; it increases the computational cost, especially in the case where a polynomial template is used (there, a single synthesis procedure takes tens of seconds).
- The second method corresponds precisely to the special case of our method where we restrict to the first moment. We argued that using higher moments is crucial in obtaining tighter bounds as the deadline becomes large, both theoretically (§4) and experimentally (§6).

F.2 Comparison with [19]
In the predicate-transformer approach, the work [19] is the closest to ours, in that it studies variances of runtimes of randomized programs. The main differences are as follows: 1) computing tail probabilities is not pursued; 2) their extension from means to variances involves an additional variable τ, which poses a challenge to automated synthesis as well as to generalization to even higher moments; and 3) they do not pursue automated analysis. Let us elaborate on point 2). In syntax-based static approaches to estimating variances or second moments, it is inevitable to reason simultaneously about both first and second moments; see Def. 3.3. In this work, we do so systematically by extending the notion of supermartingale into a vector-valued one; this way our theory generalizes to moments higher than the second in a straightforward manner. In contrast, in [19], an additional variable τ, which stands for the elapsed time, is used for mixing first and second moments.
Besides the problem of generalizing to higher moments, a main drawback of the approach of [19] is that the degrees of templates become larger when it comes to automated synthesis. For example, due to the use of τ^2 in the condition for X in [19, Thm. 7], if the template for τ is of degree k, the template for X is necessarily of degree 2k or higher. One consequence is that a fully LP-based implementation of the approach of [19] becomes hard, while it is possible in the current work (see §6).
Let us also note that the work [19] focuses on precondition calculi and does not discuss automated synthesis or analysis.