Abstract
In this paper, we present the first fully-automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only been (semi-)manually analysed in the literature. Our analysis is stated as a type-and-effect system for a first-order functional programming language with support for sampling over discrete distributions, nondeterministic choice and a ticking operator. The latter allows for the specification of fine-grained cost models. We state two soundness theorems based on two different—but strongly related—typing rules of ticking, which account differently for the cost of non-terminating computations. Finally, we provide a prototype implementation able to fully automatically analyse the aforementioned case studies.
Keywords
 amortised cost analysis
 functional programming
 probabilistic data structures
 automation
 constraint solving
1 Introduction
Probabilistic variants of well-known computational models such as automata, Turing machines or the \(\lambda \)-calculus have been studied since the early days of computer science (see [16, 17, 25] for early references). One of the main reasons for considering probabilistic models is that they often allow for the design of more efficient algorithms than their deterministic counterparts (see e.g. [6, 23, 25]). Another avenue for the design of efficient algorithms has been opened up by Sleator and Tarjan [34, 36] with their introduction of the notion of amortised complexity. Here, the cost of a single data structure operation is not analysed in isolation but as part of a sequence of data structure operations. This allows for the design of algorithms where the cost of an expensive operation is averaged out over multiple operations, resulting in a good overall worst-case cost. Both methodologies—probabilistic programming and amortised complexity—can be combined for the design of even more efficient algorithms, as for example in randomised splay trees [1], where a rotation in the splaying operation is only performed with some probability (which improves the overall performance by skipping some rotations while still guaranteeing that enough rotations are performed).
In this paper, we present the first fully-automated expected amortised cost analysis of probabilistic data structures, that is, of randomised splay trees, randomised splay heaps, randomised meldable heaps and a randomised analysis of a binary search tree. These data structures have so far only been (semi-)manually analysed in the literature. Our analysis is based on a novel type-and-effect system, which constitutes a generalisation of the type system studied in [14, 18] to the nondeterministic and probabilistic setting, as well as an extension of the type system introduced in [37] to sublinear bounds and nondeterminism. We provide a prototype implementation that is able to fully automatically analyse the case studies mentioned above. We summarise here the main contributions of our article: (i) We consider a first-order functional programming language with support for sampling over discrete distributions, nondeterministic choice and a ticking operator, which allows for the specification of fine-grained cost models. (ii) We introduce compact small-step as well as big-step semantics for our programming language. These semantics are equivalent wrt. the obtained normal forms (i.e., the resulting probability distributions) but differ wrt. the cost assigned to non-terminating computations. (iii) Based on [14, 18], we develop a novel type-and-effect system that strictly generalises the prior approaches from the literature. (iv) We state two soundness theorems (see Sect. 5.3) based on two different—but strongly related—typing rules of ticking. The two soundness theorems are stated wrt. the small-step resp. big-step semantics because these semantics precisely correspond to the respective ticking rule.
The more restrictive ticking rule can be used to establish (positive) almost sure termination (AST), while the more permissive ticking rule supports the analysis of a larger set of programs (which can be very useful in case termination is not required or can be established by other means); in fact, the more permissive ticking rule is essential for the precise cost analysis of randomised splay trees. We note that the two ticking rules and corresponding soundness theorems do not depend on the details of the type-and-effect system, and we believe that they will be of independent interest (e.g., when adapting the framework of this paper to other benchmarks and cost functions). (v) Our prototype implementation \(\mathsf {ATLAS}\) strictly extends the earlier version reported on in [18], and all our earlier evaluation results can be replicated (and sometimes improved).
With our implementation and the obtained experimental results we make two contributions to the complexity analysis of data structures:

1.
We automatically infer bounds on the expected amortised cost, which could previously only be obtained by sophisticated pen-and-paper proofs. In particular, we verify that the amortised costs of randomised variants of self-adjusting data structures improve upon their non-randomised variants. In Table 1 we state the expected cost of the randomised data structures considered and their deterministic counterparts; the benchmarks are detailed in Sect. 2.

2.
We establish a novel approach to the expected cost analysis of data structures. Our research has been greatly motivated by the detailed study of Albers et al. [1] of the expected amortised costs of randomised splaying. While [1] requires a sophisticated pen-and-paper analysis, our approach allows us to fully automatically compare the effect of different rotation probabilities on the expected cost (see Table 2 of Sect. 6).
Related Work. The generalisation of the model of computation and the study of the expected resource usage of probabilistic programs has recently received increased attention (see e.g. [2, 4, 5, 7, 10, 11, 15, 21, 22, 24, 27, 37, 38]). We focus on related work concerned with the automation of expected cost analysis of deterministic or nondeterministic probabilistic programs—imperative or functional. (A probabilistic program is called nondeterministic if it additionally makes use of nondeterministic choice.)
In recent years the automation of expected cost analysis of probabilistic data structures or programs has gained momentum, cf. [2,3,4,5, 22, 24, 27, 37, 38]. Notably, the Absynth prototype of [27] implements Kaminski’s \(\mathsf {ert}\)-calculus, cf. [15], for reasoning about expected costs. Avanzini et al. [5] generalise the \(\mathsf {ert}\)-calculus to an expected cost transformer and introduce the tool ecoimp, which provides a modular, and thus more efficient and scalable, alternative for nondeterministic, probabilistic programs. In comparison to these works, we base our analysis on a dedicated type system fine-tuned to express sublinear bounds; further, our prototype implementation \(\mathsf {ATLAS}\) derives bounds on the expected amortised costs. Neither is supported by Absynth or ecoimp.
Martingale-based techniques have been implemented, e.g., by Peixin Wang et al. [38]. Related results have been reported by Moosbrugger et al. [24]. Meyer et al. [22] provide an extension of the KoAT tool, generalising the concept of alternating size and runtime analysis to probabilistic programs. Again, these innovative tools are not suited to the benchmarks considered in our work. With respect to probabilistic functional programs, Di Wang et al. [37] provided the only prior expected cost analysis of (deterministic) probabilistic programs; this work is most closely related to our contributions. Indeed, our typing rule \((\mathsf {{ite:coin}})\) stems from [37] and the soundness proof wrt. the big-step semantics is conceptually similar. Nevertheless, our contributions strictly generalise their results. First, our core language is based on a simpler semantics, giving rise to cleaner formulations of our soundness theorems. Second, our type-and-effect system provides two different typing rules for ticking, a fact we capitalise on to increase the strength of our prototype implementation. Finally, our amortised analysis allows for logarithmic potential functions.
A bulk of research concentrates on specific forms of martingales or Lyapunov ranking functions. All these works, however, are somewhat orthogonal to our contributions, as they foremost study termination (i.e. AST or PAST) rather than computational complexity. Still, these approaches can be partially adapted to a variety of quantitative program properties, see [35] for an overview, but are incomparable in strength to the results established here.
Structure. In the next section, we provide a bird’s-eye view of our approach. Sections 3 and 4 detail the core probabilistic language employed, as well as its small- and big-step semantics. In Sect. 5 we introduce the novel type-and-effect system formally and state soundness of the system wrt. the respective semantics. In Sect. 6 we present evaluation results of our prototype implementation \(\mathsf {ATLAS}\). Finally, we conclude in Sect. 7. All proofs, part of the benchmarks and the source code are given in [19].
2 Overview of Our Approach and Results
In this section, we first sketch our approach on an introductory example and then detail the benchmarks and results depicted in Table 1 in the Introduction.
2.1 Introductory Example
Consider the definition of the function , depicted in Fig. 1. The expected amortised complexity of is \(\log _2(|t|)\), where \(|t|\) denotes the size of a tree t (defined as the number of leaves of the tree).^{Footnote 1} Our analysis is set up in terms of template potential functions with unknown coefficients, which will be instantiated by our analysis. Following [14, 18], our potential functions are composed of two types of resource functions, which can express logarithmic amortised cost: For a sequence of n trees \(t_{1}, \ldots , t_{n}\) and coefficients \(a_i \in {\mathbb {N}}, b \in {\mathbb {Z}}\), with \(\sum _{i=1}^n a_i + b \geqslant 0\), the resource function \(p_{\left( a_{1}, \ldots , a_{n}, b\right) }\left( t_{1}, \ldots , t_{n}\right) \mathrel {:=}\log _{2}\left( a_{1} \cdot |t_{1}|+\cdots +a_{n} \cdot |t_{n}|+b\right) \) denotes the logarithm of a linear combination of the sizes of the trees. The resource function \(\mathsf {rk}(t)\), which is a variant of Schoenmakers’ potential, cf. [28, 31, 32], is inductively defined as (i) ; (ii) , where l, r are the left resp. right child of the tree , and d is some data element that is ignored by the resource function. (We note that \(\mathsf {rk}(t)\) is not needed for the analysis of but is needed for more involved benchmarks, e.g. randomised splay trees.) With these resource functions at hand, our analysis introduces the coefficients \(q_*\), \(q_{(1,0)}\), \(q_{(0,2)}\), \(q'_*\), \(q'_{(1,0)}\), \(q'_{(0,2)}\) and employs the following Ansatz:^{Footnote 2}
Here, denotes the expected cost of executing on tree t, where the cost is given by the ticks as indicated in the source code (each tick accounts for a recursive call). The result of our analysis will be an instantiation of the coefficients, returning \(q_{(1,0)} = 1\) and zero for all other coefficients, which allows us to directly read off the logarithmic bound \(\log _2(|t|)\) of .
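To make the two resource functions concrete, the following sketch computes \(p_{(a_1,\ldots ,a_n,b)}\) and \(\mathsf {rk}\) on a toy tree representation. This is our own illustration: the base case \(\mathsf {rk}(\mathsf {leaf}) = 0\) is an assumption (the paper's base case is elided in this excerpt), while the node case follows the recurrence recalled later in this section.

```python
import math

# None represents a leaf; a node is a triple (left, data, right).
def size(t):
    """|t|: the number of leaves of t."""
    if t is None:
        return 1
    l, _, r = t
    return size(l) + size(r)

def p(coeffs, b, trees):
    """p_{(a_1,...,a_n,b)}(t_1,...,t_n) = log2(a_1*|t_1| + ... + a_n*|t_n| + b)."""
    return math.log2(sum(a * size(t) for a, t in zip(coeffs, trees)) + b)

def rk(t):
    """Variant of Schoenmakers' potential; rk(leaf) = 0 is an assumption here."""
    if t is None:
        return 0.0
    l, _, r = t
    return rk(l) + math.log2(size(l)) + math.log2(size(r)) + rk(r)
```

For the tree with a single internal node, \(p_{(1,0)}\) evaluates to \(\log _2 2 = 1\), and \(\mathsf {rk}\) to 0 under the assumed base case.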
Our analysis is formulated as a type-and-effect system, introducing the above template potential functions for every subexpression of the program under analysis. The typing rules of our system give rise to a constraint system over the unknown coefficients that captures the relationship between the potential functions of the subexpressions of the program. Solving the constraint system then gives a valid instantiation of the potential function coefficients. Our type-and-effect system constitutes a generalisation of the type system studied in [14, 18] to the nondeterministic and probabilistic setting, as well as an extension of the type system introduced in [37] to sublinear bounds and nondeterminism.
In the following, we survey our type-and-effect system by means of example . A partial type derivation is given in Fig. 2. For brevity, type judgements and the type rules are presented in a simplified form. In particular, we restrict our attention to tree types, denoted as \(\mathsf {T}\). This omission is inessential to the actual complexity analysis. For the full set of rules see [19]. We now discuss this type derivation step by step.
Let e denote the body of the function definition of , cf. Fig. 1. Our automated analysis infers an annotated type by verifying that the type judgement \({t}{:}{\mathsf {T}}{\mid }Q\,\vdash \,{e}{:}{\mathsf {T}}{\mid }Q'\) is derivable. Types are decorated with annotations \(Q \mathrel {:=}[q_*,q_{(1,0)},q_{(0,2)}]\) and \(Q' \mathrel {:=}[q'_*,q'_{(1,0)},q'_{(0,2)}]\)—employed to express the potential carried by the arguments to and its results. Annotations fix the coefficients of the resource functions in the corresponding potential functions, e.g., (i) \(\varPhi ({{t}{:}{\mathsf {T}}} {\mid } {Q}) \mathrel {:=}q_*\cdot \mathsf {rk}(t) + q_{(1,0)} \cdot p_{(1,0)}(t) + q_{(0,2)}\cdot p_{(0,2)}(t)\) and (ii) \(\varPhi ({{e}{:}{\mathsf {T}}} {\mid } {Q'}) \mathrel {:=}q'_*\cdot \mathsf {rk}(e) + q'_{(1,0)} \cdot p_{(1,0)}(e) + q'_{(0,2)} \cdot p_{(0,2)}(e)\).
By our soundness theorems (see Sect. 5.3), such a typing guarantees that the expected amortised cost of is bounded by the expectation (wrt. the value distribution in the limit) of the difference between \(\varPhi ({{t}{:}{\mathsf {T}}} {\mid } {Q})\) and . Because e is a expression, the following rule is applied (we only state a restricted rule here, the general rule can be found in [19]):
Here \(e_1\) denotes the subexpression of e that corresponds to the case of . Apart from the annotations Q, \(Q_1\) and \(Q'\), the rule \((\mathsf {{match}})\) constitutes a standard type rule for pattern matching. With regard to the annotations Q and \(Q_1\), \((\mathsf {{match}})\) ensures the correct distribution of potential by inducing the constraints
where the constraints are immediately justified by recalling the definitions of the resource functions \(p_{\left( a_{1}, \ldots , a_{n}, b\right) }\left( t_{1}, \ldots , t_{n}\right) :=\log _{2}\left( a_{1} \cdot |t_{1}|+\cdots +a_{n} \cdot |t_{n}|+b\right) \) and \(\mathsf {rk}(t) = \mathsf {rk}(l) + \log _2(|l|) + \log _2(|r|) + \mathsf {rk}(r)\).
The next rule is a structural rule, representing a weakening step that rewrites the annotations of the variable context. The rule \((\mathsf {{w}})\) allows a suitable adaptation of the coefficients based on the following inequality, which holds for any substitution \(\sigma \) of variables by values, \(\varPhi ({\sigma };{{l}{:}{\mathsf {T}}, {r}{:}{\mathsf {T}}} {\mid } {Q_1}) \geqslant \varPhi ({\sigma };{{l}{:}{\mathsf {T}}, {r}{:}{\mathsf {T}}} {\mid } {Q_2})\).
In our prototype implementation this comparison is performed symbolically. We use a variant of Farkas’ Lemma [19, 33] in conjunction with simple mathematical facts about the logarithm to linearise this symbolic comparison, namely the monotonicity of the logarithm and the fact that \(2 + \log _2(x) + \log _2(y) \leqslant 2\log _2(x+y)\) for all \(x,y \geqslant 1\). For example, Farkas’ Lemma in conjunction with the latter fact gives rise to
for some fresh rational coefficient \(f \geqslant 0\) introduced by Farkas’ Lemma. After having generated the constraint system for , the solver is free to instantiate f as needed. In fact in order to discover the bound \(\log _2({t} )\) for , the solver will need to instantiate , corresponding to the inequality .
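The two logarithm facts used in this linearisation are easy to check numerically; the second one holds because \(2 + \log _2(x) + \log _2(y) = \log _2(4xy)\) and \(4xy \leqslant (x+y)^2\), i.e., \((x-y)^2 \geqslant 0\). A quick sanity check on sample points (illustrative only):

```python
import math

def log_facts_hold(values):
    """Check monotonicity of log2 and 2 + log2(x) + log2(y) <= 2*log2(x+y)
    on a grid of sample points x, y >= 1 (up to a small rounding slack)."""
    pts = sorted(values)
    for x, y in zip(pts, pts[1:]):
        if math.log2(x) > math.log2(y) + 1e-12:          # monotonicity
            return False
    for x in pts:
        for y in pts:
            if 2 + math.log2(x) + math.log2(y) > 2 * math.log2(x + y) + 1e-12:
                return False
    return True
```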
So far, the rules did not refer to sampling and are unchanged from their (non-probabilistic) counterparts introduced in [14, 18]. The next rule, however, formalises a coin toss, biased with probability p. Our general rule \((\mathsf {{ite:coin}})\) is depicted in Fig. 12 and is inspired by a similar rule for coin tosses that has recently been proposed in the literature, cf. [37]. This rule specialises as follows to our introductory example:
Here \(e_2\) and \(e_3\), respectively, denote the subexpressions of the conditional , and in addition the crucial condition holds. This condition, expressing that the corresponding annotations are subject to the probability of the coin toss, gives rise to the following constraints (among others)
In the following, we will only consider one alternative of the coin toss and proceed as in the partial type derivation depicted in Fig. 2 (i.e., we state the branch and omit the symmetric branch). Thus, we next apply the rule for the expression. This rule is the most involved typing rule in the system proposed in [14, 18]. However, for our leading example it suffices to consider the following simplified variant:
Focusing on the annotations, the rule \((\mathsf {{let:tree}})\) suitably distributes potential assigned to the variable context, embodied in the annotation \(Q_3\), to the recursive call within the expression (via annotation \(Q_4\)) and the construction of the resulting tree (via annotation \(Q_7\)). The distribution of potential is facilitated by generating constraints that can roughly be stated as two “equalities”, that is, (i) “\(Q_3 = Q_4 + D\)”, and (ii) “\(Q_7 = D + Q_6\)”. Equality (i) states that the input potential is split into some potential \(Q_4\) used for typing and some remainder potential D (which however is not constructed explicitly and only serves as a placeholder for potential that will be passed on). Equality (ii) states that the potential \(Q_7\) used for typing equals the remainder potential D plus the leftover potential \(Q_6\) from the typing of . The \((\mathsf {{tick:now}})\) rule then ensures that costs are properly accounted for by generating constraints for \(Q_4 = Q_5 + 1\) (see Fig. 2). Finally, the type derivation ends with the application rule, denoted as \((\mathsf {{app}})\), which verifies that the recursive call is well-typed wrt. the (annotated) signature of the function , i.e., the rule enforces that \(Q_5 = Q\) and \(Q_6 = Q'\). We illustrate (a subset of) the constraints induced by \((\mathsf {{let}})\), \((\mathsf {{tick:now}})\) and \((\mathsf {{app}})\):
where (i) the constraints in the first three columns—involving the annotations \(Q_3\), \(Q_4\), \(Q_6\) and \(Q_7\)—stem from the constraints of the rule \((\mathsf {{let:tree}})\); (ii) the constraints in the last column—involving \(Q_4\), \(Q_5\), Q and \(Q'\)—stem from the constraints of the rule \((\mathsf {{tick:now}})\) and \((\mathsf {{app}})\). For example, \(q^3_{(1,0,0)} = q^4_{(1,0)}\) and \(q^3_{(0,1,0)} = q^7_{(0,1,0)}\) distributes the part of the logarithmic potential represented by \(Q_3\) to \(Q_4\) and \(Q_7\); \(q^6_1 = q^7_1\) expresses that the rank of the result of evaluating the recursive call can be employed in the construction of the resulting tree ; \(q^4_{(1,0)} = q^5_{(1,0)}\) and \(q^4_{(0,2)} = q^5_{(0,2)} + 1\) relate the logarithmic resp. constant potential according to the tick rule, where the addition of one accounts for the cost embodied by the tick rule; \(q^5_{(1,0)} = q_{(1,0)}\) stipulates that the potential at the recursive call site must match the function type.
Our prototype implementation \(\mathsf {ATLAS}\) collects all these constraints and solves them fully automatically. Following [14, 18], our implementation in fact searches for a solution that minimises the resulting complexity bound. For the function, our implementation finds a solution that sets \(q_{(1,0)}\) to 1, and all other coefficients to zero. Thus, the logarithmic bound \(\log _2(|t|)\) follows.
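As a small illustration of what such a constraint system looks like once generated, the following sketch checks a candidate instantiation against the sample constraints listed above. The coefficient names mirror the annotations \(Q, Q', Q_3, \ldots , Q_7\); the concrete values of the internal annotations are our own plausible completion of the solution \(q_{(1,0)} = 1\), not output quoted from \(\mathsf {ATLAS}\).

```python
def satisfies(q):
    """Check the sample constraints from (let:tree), (tick:now) and (app)."""
    return (q['q3_(1,0,0)'] == q['q4_(1,0)']      # (let:tree): log potential to Q4
        and q['q3_(0,1,0)'] == q['q7_(0,1,0)']    # (let:tree): log potential to Q7
        and q['q6_1']      == q['q7_1']           # rank of recursive result reused
        and q['q4_(1,0)']  == q['q5_(1,0)']       # (tick:now): log part unchanged
        and q['q4_(0,2)']  == q['q5_(0,2)'] + 1   # (tick:now): pay one unit of cost
        and q['q5_(1,0)']  == q['q_(1,0)'])       # (app): match function signature

# One instantiation consistent with these constraints: q_(1,0) = 1, all other
# signature coefficients zero, internal annotations chosen accordingly.
solution = {'q3_(1,0,0)': 1, 'q4_(1,0)': 1, 'q3_(0,1,0)': 0, 'q7_(0,1,0)': 0,
            'q6_1': 0, 'q7_1': 0, 'q5_(1,0)': 1, 'q4_(0,2)': 1, 'q5_(0,2)': 0,
            'q_(1,0)': 1}
```

In the actual tool the unknowns are rationals and the system is handed to a constraint solver, which additionally minimises the resulting bound.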
2.2 Overview of Benchmarks and Results
Randomised Meldable Heaps. Gambin et al. [13] proposed meldable heaps as a simple priority-queue data structure that is guaranteed to have expected logarithmic cost for all operations. All operations can be implemented in terms of the function, which takes two heaps and returns a single heap as a result. The partial source code of is given in Fig. 3 (the full source code of all examples can be found in [19]). Our tool \(\mathsf {ATLAS}\) fully automatically infers the bound \(\log _2(|h_1|) + \log _2(|h_2|)\) on the expected cost of .
Randomised Splay Trees. Albers et al. [1] propose randomised splay trees as a variation of deterministic splay trees [34] with better expected runtime complexity (the same asymptotic complexity in O-notation, but with smaller constants). Related results have been obtained by Fürer [12]. The proposal is based on the observation that it is not necessary to rotate the tree in every (recursive) splaying operation; it suffices to perform rotations with some fixed positive probability in order to reap the asymptotic benefits of self-adjusting search trees. The theoretical analysis of randomised splay trees [1] starts by refining the cost model of [34], which simply counts the number of rotations, into one that accounts for recursive calls with a cost of c and for rotations with a cost of d.
We present a snippet of a functional implementation of randomised splay trees in Fig. 4. Note that in this code snippet we have set ; this choice is arbitrary: we have chosen these costs in order to be able to compare the resulting amortised costs to the deterministic setting of [18], where the combined cost of the recursive call and rotation is set to 1. Our analysis requires fixed costs c and d, but these constants can be chosen by the user; for example, one can set \(c=1\) and \(d=2.75\), corresponding to the costs observed during the experiments in [1]. Likewise, the probability of the coin toss has been arbitrarily set to but could be set differently by the user. (We remark that, to the best of our knowledge, no theoretical analysis has been conducted on how to choose the best value of p for given costs c and d.) Our prototype implementation is able to fully automatically infer an amortised complexity bound of for (with c, d and p fixed as above), which improves on the complexity bound of for the deterministic version of as reported in [18], confirming that randomisation indeed improves the expected runtime.
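The effect of the randomisation on the cost can be illustrated with a hypothetical model of a single descent (all names below are ours, not the benchmark code): every recursive call costs c, and a rotation of cost d is performed only with probability p, so a descent of depth k has expected cost \(k \cdot (c + p \cdot d)\) by linearity of expectation. A Monte-Carlo sketch, using \(c=1\), \(d=2.75\) and, as an assumption, \(p=1/2\):

```python
import random

def descent_cost(depth, c, d, p, rng):
    """Cost of one randomised descent: c per recursive call, plus a rotation
    of cost d performed with probability p at each step."""
    cost = 0.0
    for _ in range(depth):
        cost += c
        if rng.random() < p:
            cost += d
    return cost

def expected_descent_cost(depth, c, d, p):
    # Linearity of expectation: each step contributes c + p*d on average.
    return depth * (c + p * d)
```

Averaging `descent_cost` over many runs converges to `expected_descent_cost`; e.g., a depth-10 descent with these parameters costs 23.75 in expectation.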
We remark on how the amortised complexity bound of for is computed by our analysis. Our tool \(\mathsf {ATLAS}\) computes an annotated type for that corresponds to the inequality
By setting as the potential function in the sense of Sleator and Tarjan [34, 36], the above inequality allows us to directly read off an upper bound on the amortised complexity of (we recall that the amortised complexity in the sense of Sleator and Tarjan is defined as the sum of the actual costs plus the output potential minus the input potential):
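The telescoping argument behind this definition can be sketched directly: summing the amortised costs \(a_i = \text {actual}_i + \varPhi _i - \varPhi _{i-1}\) over a sequence of operations, all intermediate potentials cancel, so the total actual cost is bounded by the total amortised cost whenever the potential is non-negative and starts at zero. A minimal sketch (names are ours):

```python
def amortised_costs(actuals, potentials):
    """potentials[i] is the potential after i operations (potentials[0] = 0).
    Returns the per-operation amortised costs a_i = actual_i + Phi_i - Phi_{i-1}."""
    return [c + potentials[i + 1] - potentials[i] for i, c in enumerate(actuals)]
```

Because the intermediate potentials telescope, `sum(amortised_costs(...))` equals the total actual cost plus the final potential minus the initial potential.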
Probabilistic Analysis of Binary Search Trees. We present a probabilistic analysis of a deterministic binary search tree, which offers the usual , , and operations, where uses , given in Fig. 6, as a subroutine (the source code of the missing operations is given in [19]). We assume that the elements inserted, deleted and searched for are uniformly distributed; hence, we conduct a probabilistic analysis by replacing every comparison with a coin toss of probability one half. We will refer to the resulting data structure as Coin Search Tree in our benchmarks. The source code of is given in Fig. 5.
Our tool \(\mathsf {ATLAS}\) infers a logarithmic expected amortised cost for all operations, i.e., for and we obtain (i) ; and (ii) , from which we obtain an expected amortised cost of and , respectively.
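The probabilistic comparison model can be sketched as follows: a hypothetical insertion routine in which every comparison of the deterministic code is replaced by a fair coin toss, so the descent goes left or right with probability one half at every node (the actual benchmark code is given in [19]; the names below are ours).

```python
import random

def insert(t, x, rng):
    """Insert x into a tree (None = leaf, (l, d, r) = node), replacing each
    comparison by a fair coin toss."""
    if t is None:
        return (None, x, None)
    l, d, r = t
    if rng.random() < 0.5:               # coin toss instead of comparing x and d
        return (insert(l, x, rng), d, r)
    return (l, d, insert(r, x, rng))

def nodes(t):
    """Number of internal nodes of t."""
    if t is None:
        return 0
    l, _, r = t
    return 1 + nodes(l) + nodes(r)
```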
3 Probabilistic Functional Language
Preliminaries. Let \({\mathbb {R}^+_0}\) denote the non-negative reals and \({\mathbb {R}^{+\infty }_0}\) their extension by \(\infty \). We are only concerned with discrete distributions and drop “discrete” in the following. Let A be a countable set and let \(\mathsf {D}(A)\) denote the set of (sub)distributions \(\mu \) over A, whose support \(\mathsf {supp}(\mu ) \mathrel {:=}\{a \in A \mid \mu (a) \not = 0\}\) is countable. Distributions are denoted by Greek letters. For \(\mu \in \mathsf {D}(A)\), we may write \(\mu = \{a^{p_i}_i\}_{i \in I}\), assigning probabilities \(p_i\) to \(a_i \in A\) for every \(i \in I\), where I is a suitably chosen index set. We set \(|\mu | \mathrel {:=}\sum _{i \in I} p_i\). If the support is finite, we simply write \(\mu = \{a^{p_1}_1,\dots ,a^{p_n}_n\}\). The expected value of a function \(f :A \rightarrow {\mathbb {R}^+_0}\) on \(\mu \in \mathsf {D}(A)\) is defined as \(\mathbb {E}_{\mu }(f) \mathrel {:=}\sum _{a \in \mathsf {supp}(\mu )} \mu (a) \cdot f(a)\). Further, we denote by \(\sum _{i \in I} p_i \cdot \mu _i\) the convex combination of distributions \(\mu _i\), where \(\sum _{i\in I} p_i \leqslant 1\). Since by assumption \(\sum _{i\in I} p_i \leqslant 1\), \(\sum _{i \in I} p_i \cdot \mu _i\) is always a (sub)distribution.
In the following, we also employ a slight extension of (discrete) distributions, dubbed multidistributions [4]. Multidistributions are countable multisets \(\{{a_i^{p_i} }\}_{i \in I}\) over pairs \(p_i :a_i\) of probabilities \(0 < p_i \leqslant 1\) and objects \(a_i \in A\) with \(\sum _{i \in I} p_i \leqslant 1\). (For ease of presentation, we do not distinguish notationally between sets and multisets.) Multidistributions over objects A are denoted by \(\mathsf {M}(A)\). For a multidistribution \(\mu \in \mathsf {M}(A)\) the induced distribution \(\overline{\mu } \in \mathsf {D}(A)\) is defined in the obvious way by summing up the probabilities of equal objects.
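These notions translate directly into code; the following sketch represents subdistributions as dictionaries mapping objects to probabilities, and multidistributions as lists of probability–object pairs:

```python
def expectation(mu, f):
    """E_mu(f): sum of mu(a) * f(a) over the support of mu."""
    return sum(p * f(a) for a, p in mu.items())

def convex(weighted):
    """Convex combination sum_i p_i * mu_i of subdistributions (sum p_i <= 1)."""
    out = {}
    for p, mu in weighted:
        for a, q in mu.items():
            out[a] = out.get(a, 0.0) + p * q
    return out

def induced(multi):
    """Distribution induced by a multidistribution: sum the probabilities of
    equal objects."""
    out = {}
    for p, a in multi:
        out[a] = out.get(a, 0.0) + p
    return out
```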
Syntax. In Fig. 7, we detail the syntax of our core probabilistic (firstorder) programming language. With the exception of ticks, expressions are given in to simplify the presentation of the operational semantics and the typing rules. In order to ease the readability, we make use of mild syntactic sugaring in the presentation of actual code (as we already did above).
To make the presentation more succinct, we assume only the following types: a set of base types \(\mathcal {B}\) such as Booleans , integers \(\mathsf {Int}\), or rationals \(\mathsf {Rat}\), product types, and binary trees \(\mathsf {T}\), whose internal nodes are labelled with elements \({b}{:}{\mathsf {B}}\), where \(\mathsf {B}\) denotes an arbitrary base type. Values are either of base types, trees or pairs of values. We use lowercase Greek letters (from the beginning of the alphabet) for the denotation of types. Elements \({t}{:}{\mathsf {T}}\) are defined by the following grammar which fixes notation. . The size of a tree is the number of leaves: , .
We skip the standard definition of integer constants \(n \in {\mathbb {Z}}\) as well as variable declarations, cf. [29]. Furthermore, we omit binary operators with the exception of essential comparisons. As mentioned, to represent sampling we make use of a dedicated  expression, whose guard evaluates to depending on a coin toss with fixed probability. Further, nondeterministic choice is similarly rendered via an  expression. Moreover, we make use of ticking, denoted by an operator to annotate costs, where a, b are optional and default to one. Following Avanzini et al. [2], we represent ticking as an operation, rather than in , as in Wang et al. [37]. (This allows us to suit a bigstep semantics that only accumulates the cost of terminating expressions.) The set of all expressions is denoted \(\mathcal {E}\).
A typing context is a mapping from variables \(\mathcal {V}\) to types. Type contexts are denoted by uppercase Greek letters, and the empty context is denoted \(\varepsilon \). A program \(\mathsf {P}\) consists of a signature \(\mathcal {F}\) together with a set of function definitions of the form \(f~x_1~\dots ~x_n = e_f\), where the \(x_i\) are variables and \(e_f\) an expression. When considering some expression e that includes function calls we will always assume that these function calls are defined by some program \(\mathsf {P}\). A substitution or (environment) \(\sigma \) is a mapping from variables to values that respects types. Substitutions are denoted as sets of assignments: \(\sigma =\left\{ x_{1} \mapsto t_{1}, \ldots , x_{n} \mapsto t_{n}\right\} \). We write \(\mathsf {dom}(\sigma )\) to denote the domain of \(\sigma \).
4 Operational Semantics
Small-Step Semantics. The small-step semantics is formalised as a (weighted) nondeterministic, probabilistic abstract reduction system [4, 9] over \(\mathsf {M}(\mathcal {E})\). In this way, (expected) cost, nondeterminism and probabilistic sampling are taken care of. Informally, a probabilistic abstract reduction system is a transition system where reducts are chosen from a probability distribution. A reduction wrt. such a system is then given by a stochastic process [9], or equivalently, as a reduction relation over multidistributions [4], which arise naturally in the context of nondeterminism (we refer the reader to [4] for an example that illustrates the advantage of multidistributions in the presence of nondeterminism).
Following [5], we equip transitions with (positive) weights, amounting to the cost of the transition. Formally, a (weighted) Probabilistic Abstract Reduction System (PARS) on a countable set A is a ternary relation \(\cdot {\mathop {\mathrel {\mapsto }}\limits ^{\cdot }} \cdot \subseteq A \times {\mathbb {R}^+_0}\times \mathsf {D}(A)\). For \(a \in A\), a rule \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ b^{\mu (b)} \}_{b \in A}\) indicates that a reduces to b with probability \(\mu (b)\) and cost \(c \in {\mathbb {R}^+_0}\). Note that any right-hand side of a PARS is supposed to be a full distribution, i.e., the probabilities in \(\mu \) sum up to 1. Given two objects a and b, \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ b^1 \}\) will be written \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} b\) for brevity. An object \(a \in A\) is called terminal if there is no rule \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \mu \), denoted . We present the one-step reduction relation \(\mathrel {\mapsto }\) given in Fig. 8 as a (nondeterministic) PARS over multidistributions. As above, we sometimes identify Dirac distributions \(\{e^1\}\) with e. Evaluation contexts are formed by expressions, as in the following grammar: . We denote by \(\mathbb {C}[e]\) the result of substituting expression e for the empty context \(\Box \). Contexts are exploited to lift the one-step reduction to a ternary weighted reduction relation \({{\mathop {\longrightarrow }\limits ^{\cdot }}} \subseteq {\mathsf {M}(\mathcal {E}) \times {\mathbb {R}^{+\infty }_0}\times \mathsf {M}(\mathcal {E})}\), cf. Fig. 9. (In (Conv), \(\biguplus \) refers to the usual notion of multiset union.)
The relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\) constitutes the operational (small-step) semantics of our simple probabilistic functional language. Thus \(\mu {\mathop {\longrightarrow }\limits ^{c}} \nu \) states that the submultidistribution of objects \(\mu \) evolves to a submultidistribution of reducts \(\nu \) in one step, with an expected cost of c. Note that since \({\mathop {\mathrel {\mapsto }}\limits ^{\cdot }}\) is nondeterministic, so is the reduction relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\). We now define the evaluation of an expression \(e \in \mathcal {E}\) wrt. the small-step relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\): We set \(e {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \), if there is a (possibly infinite) sequence \(\{{e^1}\} {\mathop {\longrightarrow }\limits ^{c_1}} \mu _1 {\mathop {\longrightarrow }\limits ^{c_2}} \mu _2 {\mathop {\longrightarrow }\limits ^{c_3}} \dots \) with \(c = \sum _{n \geqslant 1} c_n\) and \(\mu = \lim _{n \rightarrow \infty } {\overline{\mu _n}}\!{\restriction _{V}}\), where \({\overline{\mu _n}}\!{\restriction _{V}}\) denotes the restriction of the distribution \(\overline{\mu _n}\) (induced by the multidistribution \(\mu _n\)) to a (sub)distribution over values. Note that the \({\overline{\mu _n}}\!{\restriction _{V}}\) form a CPO wrt. the pointwise ordering, cf. [39]. Hence, the fixed point \(\mu =\lim _{n \rightarrow \infty } {\overline{\mu _n}}\!{\restriction _{V}}\) exists. We also write \(e {\mathop {\longrightarrow }\limits ^{}}_\infty \mu \) in case the cost of the evaluation is not important.
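As a concrete (hypothetical) instance, consider the PARS with a single non-terminal object `e` that reduces with cost 1 to the value `v` with probability 1/2 and back to `e` with probability 1/2. Iterating the lifted reduction from the Dirac distribution on `e` and summing the expected cost of each step approximates the evaluation: the value mass tends to 1 and the accumulated expected cost to 2.

```python
def evaluate(steps):
    """Iterate the one-step reduction on (sub)distributions, starting from the
    Dirac distribution on 'e'; returns (value subdistribution, expected cost)."""
    mu = {'e': 1.0}
    values, cost = {}, 0.0
    for _ in range(steps):
        mass = mu.pop('e', 0.0)
        if mass == 0.0:
            break
        cost += mass                      # each step costs 1, weighted by mass
        values['v'] = values.get('v', 0.0) + mass / 2
        mu['e'] = mass / 2
    return values, cost
```

After n steps the value mass is \(1 - 2^{-n}\) and the expected cost is \(2 - 2^{1-n}\), so the limits are 1 and 2, respectively.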
(Positive) Almost Sure Termination. A program \(\mathsf {P}\) is almost surely terminating (AST) if for any substitution \(\sigma \), and any evaluation \(e\sigma {\mathop {\longrightarrow }\limits ^{}}_\infty \mu \), we have that \(\mu \) forms a full distribution. For the definition of positive almost sure termination, we assume that every statement of \(\mathsf {P}\) is enclosed in a ticking operation with cost one; we note that such a cost models the length of the computation. We say \(\mathsf {P}\) is positively almost surely terminating (PAST) if, for any substitution \(\sigma \) and any evaluation \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \), we have \(c < \infty \). It is well known that PAST implies AST, cf. [9].
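As a toy illustration of these notions (our own example, not one of the paper's benchmarks), consider a loop that flips a fair coin and, on heads, pays one tick and repeats. It terminates with probability 1 (AST), and its expected cost is finite (PAST); the sketch below computes truncated sums of both quantities:

```python
from fractions import Fraction

# Toy loop: flip a fair coin; on heads pay a tick of cost 1 and repeat,
# on tails stop. P(exactly k heads before the first tail) = (1/2)^(k+1).

def truncated_termination_prob(n):
    # Partial sum of the termination probability; tends to 1 (AST).
    return sum(Fraction(1, 2) ** (k + 1) for k in range(n + 1))

def truncated_expected_cost(n):
    # Partial sum of the expected number of ticks; tends to 1 < oo (PAST).
    return sum(k * Fraction(1, 2) ** (k + 1) for k in range(n + 1))
```

The partial sums converge to 1 in both cases, so the loop is PAST and hence AST.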
Big-Step Semantics. We now define the aforementioned big-step semantics. We first define approximate judgments (see Fig. 10), which state that in derivation trees of depth up to n the expression e evaluates to a subdistribution \(\mu \) over values with cost c. We then consider the cost \(c_n\) and subdistribution \(\mu _n\) for \(n \rightarrow \infty \). Note that the subdistributions \(\mu _n\) form a CPO wrt. the pointwise ordering, cf. [39]. Hence, there exists a fixed point \(\mu = \lim _{n \rightarrow \infty } \mu _n\). Moreover, we set \(c = \lim _{n \rightarrow \infty } c_n\) (note that either \(c_n\) converges to some real \(c \in {\mathbb {R}^{+\infty }_0}\) or we have \(c = \infty \)). We now define the big-step judgments by setting \(\mu = \lim _{n \rightarrow \infty } \mu _n\) and \(c = \lim _{n \rightarrow \infty } c_n\). We want to emphasise that the cost c only counts the ticks of terminating computations.
Theorem 1
(Equivalence). Let \(\mathsf {P}\) be a program and \(\sigma \) a substitution. Then, (i) a big-step judgment for \(e\sigma \) with cost c and value distribution \(\mu \) implies that \(e\sigma {\mathop {\longrightarrow }\limits ^{c'}}_\infty \mu \) for some \(c' \geqslant c\), and (ii) \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \) implies a corresponding big-step judgment with some cost \(c' \leqslant c\). Moreover, if \(e\sigma \) almost-surely terminates, we can choose \(c=c'\) in both cases.
The provided operational big-step semantics generalises the (big-step) semantics given in [18]. Further, while partly motivated by the big-step semantics introduced in [37], our big-step semantics is technically incomparable, due to a different representation of ticking, while providing additional expressivity.
5 TypeandEffect System for Expected Cost Analysis
5.1 Resource Functions
In Sect. 2, we introduced a variant of Schoenmakers' potential function, denoted as \(\mathsf {rk}(t)\), and the additional potential functions \(p_{(a_{1}, \ldots , a_{n}, b)}(t_{1}, \ldots , t_{n}) = \log _{2}(a_{1} \cdot |t_{1}| + \cdots + a_{n} \cdot |t_{n}| + b)\), denoting the \(\log _2\) of a linear combination of tree sizes. We demand \(\sum _{i=1}^n a_i + b \geqslant 0\) (\(a_i \in {\mathbb {N}}, b \in {\mathbb {Z}}\)) for well-definedness of the latter; \(\log _2\) denotes the logarithm to the base 2. Throughout the paper we stipulate \(\log _2(0) \mathrel {:=}0\) in order to avoid case distinctions. Note that the constant function 1 is representable: \(1 = \lambda t.\, \log _2(0 \cdot |t| + 2) = p_{(0, 2)}\). We are now ready to state the resource annotation of a sequence of trees.
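The resource functions \(p_{(a_1,\ldots,a_n,b)}\) can be sketched directly. The tree encoding below (leaves as `None`, inner nodes as pairs) and the reading of \(|t|\) as the number of leaves are our own illustrative assumptions; only the \(\log_2(0) := 0\) convention and the well-definedness condition are taken from the text:

```python
from math import log2

def size(t):
    # |t|: here the number of leaves of a binary tree, with leaves
    # encoded as None and inner nodes as pairs (l, r).
    if t is None:
        return 1
    left, right = t
    return size(left) + size(right)

def log2_(x):
    # The paper's convention log2(0) := 0 avoids case distinctions.
    return 0.0 if x == 0 else log2(x)

def p(coeffs, b, trees):
    # p_{(a1,...,an,b)}(t1,...,tn) = log2(a1*|t1| + ... + an*|tn| + b),
    # subject to the well-definedness condition sum(ai) + b >= 0.
    assert sum(coeffs) + b >= 0
    return log2_(sum(a * size(t) for a, t in zip(coeffs, trees)) + b)
```

In particular, the constant function 1 is recovered as \(p_{(0,2)}\): `p([0], 2, [t])` evaluates to \(\log_2(2) = 1\) for every tree t.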
Definition 1
A resource annotation, or simply annotation, of length m is a sequence \(Q = [q_{1}, \ldots , q_{m}] \cup [(q_{(a_{1}, \ldots , a_{m}, b)})_{a_{i}, b \in \mathbb {N}}]\), vanishing almost everywhere. The length of Q is denoted \(|Q|\). The empty annotation, that is, the annotation where all coefficients are set to zero, is denoted as \(\varnothing \). Let \(t_{1}, \ldots , t_{m}\) be a sequence of trees. Then, the potential of \(t_{1}, \ldots , t_{m}\) wrt. Q is given by \(\varPhi (t_{1}, \ldots , t_{m} \mid Q) = \sum _{i=1}^{m} q_i \cdot \mathsf {rk}(t_i) + \sum _{a_{1}, \ldots , a_{m}, b \in \mathbb {N}} q_{(a_{1}, \ldots , a_{m}, b)} \cdot p_{(a_{1}, \ldots , a_{m}, b)}(t_{1}, \ldots , t_{m})\).
In case of an annotation of length 1, we sometimes write \(q_*\) instead of \(q_1\). We may also write \(\varPhi ({{v}{:}{\alpha }} {\mid } {Q})\) for the potential of a value of type \(\alpha \) annotated with Q. Both notations were already used above. Note that only values of tree type are assigned a potential. We use the convention that the sequence elements of a resource annotation are denoted by the lowercase letter of the annotation, potentially with corresponding sub- or superscripts.
Example 1
Let t be a tree. To model its potential as \(\log_2(|t|)\) according to Definition 1, we simply set \(q_{(1,0)} \mathrel {:=}1\) and thus obtain \(\varPhi ({t} {\mid } {Q}) = \log _2(|t|)\), which describes the potential associated with the input tree t of our leading example above. \(\square \)
Let \(\sigma \) be a substitution, let \(\varGamma \) denote a typing context and let \(x_{1}{:}\mathsf {T}, \ldots , x_{n}{:}\mathsf {T}\) denote all variables of tree type in \(\varGamma \). A resource annotation for \(\varGamma \), or simply annotation, is an annotation for the sequence of trees \(x_{1} \sigma , \ldots , x_{n} \sigma \). We define the potential of the annotated context \(\varGamma {\mid } Q\) wrt. a substitution \(\sigma \) as \(\varPhi (\sigma ; \varGamma \mid Q) \mathrel {:=} \varPhi (x_{1} \sigma , \ldots , x_{n} \sigma \mid Q)\).
Definition 2
An annotated signature \(\mathcal {F}\) maps functions f to sets of pairs of annotated types for the arguments and the annotated type of the result:
We suppose f takes n arguments, m of which are trees; \(m \leqslant n\) by definition. Similarly, the return type may be a product \(\beta _{1} \times \cdots \times \beta _{k}\). In this case, we demand that at most one \(\beta _i\) is a tree type.^{Footnote 3}
Instead of \(\alpha _{1} \times \cdots \times \alpha _{n} \mid Q \rightarrow \beta _{1} \times \cdots \times \beta _{k} \mid Q^{\prime } \in \mathcal {F}(f)\), we sometimes succinctly write \({f}{:}{{\alpha } {\mid } {Q} \rightarrow {\beta } {\mid } {Q'}}\), where \(\alpha \), \(\beta \) denote the product types \(\alpha _{1} \times \cdots \times \alpha _{n}\), \(\beta _{1} \times \cdots \times \beta _{k}\), respectively. It is tacitly understood that the above syntactic restrictions on the lengths of the annotations Q, \(Q'\) are fulfilled. For every function f, we also consider its cost-free variant, from which all ticks have been removed. We collect the cost-free signatures of all functions in the set \(\mathcal {F}^{\text {cf}}\).
Example 2
Consider the function depicted in Fig. 2. Its signature is formally represented as \({\mathsf {T}} {\mid } {Q} \rightarrow {\mathsf {T}} {\mid } {Q'}\), where \(Q \mathrel {:=}[q_*] \cup [(q_{(a,b)})_{a,b \in {\mathbb {Z}}}]\) and \(Q' \mathrel {:=}[q'_*] \cup [(q'_{(a,b)})_{a,b \in {\mathbb {Z}}}]\). We leave it to the reader to specify the coefficients in Q, \(Q'\) so that the rule \((\mathsf {{app}})\) as depicted in Sect. 2 can indeed be employed to type the recursive call.
Let \(Q = [q_*] \cup [(q_{(a,b)})_{a,b \in {\mathbb {N}}}]\) be an annotation and let K be a rational such that \(q_{(0,2)}+ K \geqslant 0\). Then, \(Q' \mathrel {:=}Q + K\) is defined as follows: \(Q' = [q_*] \cup [(q'_{(a,b)})_{a,b \in {\mathbb {N}}}]\), where \(q'_{(0,2)} \mathrel {:=}q_{(0,2)} + K\) and \(q'_{(a,b)} \mathrel {:=}q_{(a,b)}\) for all \((a,b) \not = (0,2)\). Recall that \(q_{(0,2)}\) is the coefficient of the function \(p_{(0,2)}(t) = \log _2(0 \cdot |t| + 2) = 1\), so the annotation \(Q+K\) increments or decrements the potential induced by Q by \(|K|\), respectively. Further, we define the multiplication of an annotation Q by a constant K, denoted \(K \cdot Q\), pointwise. Moreover, let \(P = [p_*] \cup [(p_{(a,b)})_{a,b \in {\mathbb {N}}}]\) be another annotation. Then the addition \(P+Q\) of the annotations P and Q is likewise defined pointwise.
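The arithmetic on annotations can be sketched as follows; the dictionary representation and the class name are our own illustrative choices, not part of the formal development:

```python
from fractions import Fraction

class Annotation:
    # [q_*] together with coefficients q_{(a,b)}, zero almost everywhere.
    def __init__(self, q_star=0, coeffs=None):
        self.q_star = Fraction(q_star)
        self.coeffs = {k: Fraction(v) for k, v in (coeffs or {}).items()}

    def get(self, key):
        return self.coeffs.get(key, Fraction(0))

    def add_cost(self, k):
        # Q + K: only q_{(0,2)}, the coefficient of the constant
        # function p_{(0,2)} = 1, is shifted by K; all other
        # coefficients are kept, subject to q_{(0,2)} + K >= 0.
        assert self.get((0, 2)) + k >= 0
        new = dict(self.coeffs)
        new[(0, 2)] = self.get((0, 2)) + k
        return Annotation(self.q_star, new)

    def scale(self, k):
        # K * Q: pointwise multiplication by the constant K.
        return Annotation(self.q_star * k,
                          {key: v * k for key, v in self.coeffs.items()})

    def plus(self, other):
        # P + Q: pointwise addition of two annotations.
        keys = set(self.coeffs) | set(other.coeffs)
        return Annotation(self.q_star + other.q_star,
                          {key: self.get(key) + other.get(key) for key in keys})
```

Since \(p_{(0,2)}\) is the constant function 1, `add_cost` changes the induced potential by exactly K, as described above.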
5.2 Typing Rules
The non-probabilistic part of the type system is given in [19]. In contrast to the type systems employed in [14, 18], the cost model is not fixed but controlled by the ticking operator. Hence, the corresponding application rule \((\mathsf {{app}})\) has been adapted. The costing of evaluations is now handled by a dedicated ticking operator, cf. Fig. 11. In Fig. 12, we give the rule \((\mathsf {{ite:coin}})\) responsible for typing probabilistic conditionals.
We remark that the core type system, that is, the type system given by Fig. 12 together with the remaining rules of [19], ignoring annotations, enjoys subject reduction and progress in the following sense, which is straightforward to verify.
Lemma 1
Let e be such that \({e}{:}{\alpha }\) holds. Then: (i) If \(e {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ e^{p_i}_i \}_{i \in I}\), then \({e_i}{:}{\alpha }\) holds for all \(i \in I\). (ii) The expression e is in normal form wrt. \({\mathop {\mathrel {\mapsto }}\limits ^{c}}\) iff e is a value.
5.3 Soundness Theorems
A program \(\mathsf {P}\) is called well-typed if for any definition \(f(x_{1}, \ldots , x_{n}) = e \in \mathsf {P}\) and any annotated signature \(f: \alpha _{1} \times \cdots \times \alpha _{n} \mid Q \rightarrow \beta \mid Q^{\prime }\), we have a corresponding typing \({x_1}{:}{\alpha _1},\dots ,{x_n}{:}{\alpha _n}{\mid }Q\,\vdash \,{e}{:}{\beta }{\mid }Q'\). A program \(\mathsf {P}\) is called cost-free well-typed if the cost-free typing relation is used (which employs the cost-free signatures of all functions).
Theorem 2
(Soundness Theorem for \((\mathsf {{tick:now}})\)). Let \(\mathsf {P}\) be welltyped. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\) and \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \). Then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant c + \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\). Further, if \({\varGamma } {\mid } {Q} \vdash ^{\text {cf}} {e}{:}{\alpha } {\mid } {Q'}\), then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\).
Corollary 1
Let \(\mathsf {P}\) be a well-typed program such that ticking accounts for all evaluation steps. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\). Then e is positively almost surely terminating (and thus in particular almost surely terminating).
Theorem 3
(Soundness Theorem for \((\mathsf {{tick:defer}})\)). Let \(\mathsf {P}\) be well-typed. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\) and that \(e\sigma \) evaluates in the big-step semantics to \(\mu \) with cost c. Then, we have \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant c + \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\). Further, if \({\varGamma } {\mid } {Q} \vdash ^{\text {cf}} {e}{:}{\alpha } {\mid } {Q'}\), then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\).
We comment on the trade-offs between Theorems 2 and 3. As stated in Corollary 1, the benefit of Theorem 2 is that, when every recursive call is accounted for by a tick, a type derivation implies the termination of the program under analysis. The same does not hold for Theorem 3. However, Theorem 3 allows more programs to be typed than Theorem 2, because the \((\mathsf {{tick:defer}})\) rule is more permissive than \((\mathsf {{tick:now}})\). This proves very useful in case termination is not required (or can be established by other means).
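The shape of the soundness statements, namely the inequality \(\varPhi(\text{input}) \geqslant c + \mathbb {E}[\varPhi(\text{output})]\), can be illustrated on a toy random process outside the type system. Take states \(n \in \mathbb{N}\), let \(n > 0\) step with cost 1 to \(n-1\) or to \(\max(n-2, 0)\), each with probability 1/2, and take \(\varPhi(n) = n\) as candidate potential. This is entirely our own example, not the paper's calculus:

```python
from fractions import Fraction
from functools import lru_cache

def phi(n):
    # Candidate potential for the toy process.
    return Fraction(n)

def step_distribution(n):
    # n > 0 steps with cost 1 to n-1 or max(n-2, 0), each with prob 1/2.
    return [(Fraction(1, 2), n - 1), (Fraction(1, 2), max(n - 2, 0))]

def amortised_inequality_holds(n):
    # Phi(n) >= cost + E[Phi(successor)]: the shape of the inequality
    # in the soundness theorems, checked locally for one state.
    expected_next = sum(p * phi(m) for p, m in step_distribution(n))
    return phi(n) >= 1 + expected_next

@lru_cache(maxsize=None)
def expected_total_cost(n):
    # E(0) = 0 and E(n) = 1 + (E(n-1) + E(max(n-2, 0))) / 2 for n > 0.
    if n == 0:
        return Fraction(0)
    return 1 + sum(p * expected_total_cost(m) for p, m in step_distribution(n))
```

The local inequality holds for every state, and consequently the expected total cost from state n is bounded by the initial potential \(\varPhi(n) = n\), mirroring how a type derivation bounds expected cost by potential.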
We exemplify this difference on the function of Fig. 13. Theorem 3 supports the derivation of its type, while Theorem 2 does not. This is due to the fact that potential can be “borrowed” with Theorem 3. To wit, from the potential \(\mathsf {rk}(t) + \log _2(|t|) + 1\) for the input one can derive the potential \(\mathsf {rk}(l') + \mathsf {rk}(r')\) for the intermediate context after both let-expressions (note that there is no +1 in this context, because the +1 has been used to pay for the ticks around the recursive calls). Afterwards one can restore the +1 by weakening \(\mathsf {rk}(l') + \mathsf {rk}(r')\) (using in addition that \(\mathsf {rk}(t) \geqslant 1\) for all trees t). On the other hand, we cannot “borrow” with Theorem 2, because the rule \((\mathsf {{tick:now}})\) forces the +1 for the recursive call to be paid immediately (but there is not enough potential to pay for it). In the same way, the application of the rule \((\mathsf {{tick:defer}})\) and Theorem 3 is essential to establish the logarithmic amortised cost of randomised splay trees. (We note that the termination of both functions is easy to establish by other means: it suffices to observe that recursive calls are on subtrees of the input tree.)
6 Implementation and Evaluation
Implementation. Our prototype \(\mathsf {ATLAS}\) is an extension of the tool described in [18]. In particular, we rely on the preprocessing steps and the implementation of the weakening rule as reported in [18] (which makes use of Farkas' Lemma in conjunction with selected mathematical facts about the logarithm, as mentioned above). We only use the fully-automated mode reported in [18]. We have adapted the generation of the constraint system to the rules presented in this paper. We rely on Z3 [26] for solving the generated constraints, and use the optimisation heuristics of [18] for steering the solver towards solutions that minimise the resulting expected amortised complexity of the function under analysis.
Evaluation. We present results for the benchmarks described in Sect. 2 (plus a randomised version of splay heaps; the source code can be found in [19]) in Table 1. Table 3 details the computation time for type checking our results. Note that type inference takes considerably longer (tens of hours). To the best of our knowledge, this is the first time that an expected amortised cost could be inferred for these data structures.
By comparing the costs of the operations of randomised splay trees and heaps to the costs of their deterministic versions (see Table 1), one can see that the randomised variants have equal or lower complexity in all cases (as noted in Table 2, we have set the costs of the recursive call and the rotation such that in the deterministic case, which corresponds to a coin toss with \(p=1\), these costs always add up to one). Clearly, setting the cost of the recursion to the same value as the cost of the rotation need not reflect the relation of the actual costs. A more accurate estimation of the relation of these two costs will likely require careful experimentation with data structure implementations, which we consider orthogonal to our work. Instead, we report that our analysis is readily adapted to different costs and different coin-toss probabilities. We present an evaluation for different values of p, recursion cost c and rotation cost \(1-c\) in Table 2. In preparing Table 2, the template \(q_{*} \cdot \mathsf {rk}(t) + q_{(1, 0)} \cdot \log _2(|t|) + q_{(0, 2)}\) was used for performance reasons. The memory usage according to Z3's “max memory” statistic was 7129 MiB per instance. The total runtime was 1 h 45 min, with an average of 11 min 39 s and a median of 2 min 33 s. Two instances took longer (36 min and 49 min).
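Under the cost model just described (the recursion cost c is always paid, while the rotation cost \(1-c\) is only incurred when the coin shows heads, i.e. with probability p), the expected cost of a single step is \(c + p \cdot (1-c)\). The small sketch of that arithmetic below is ours, not ATLAS output:

```python
from fractions import Fraction

def expected_step_cost(p, c):
    # Recursion cost c is always paid; the rotation cost 1 - c is
    # paid only with probability p (the coin comes up heads).
    return c + p * (1 - c)
```

In the deterministic case \(p = 1\) the two costs always add up to one, matching the remark above, while for \(p < 1\) the expected step cost drops below one.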
Deterministic Benchmarks. For comparison, we have also evaluated our tool \(\mathsf {ATLAS}\) on the benchmarks of [18]. All results could be reproduced by our implementation. In fact, for one of the functions it yields an improved bound compared to the one reported in [18]. We note that we are able to report better results because we have generalised the resource functions \(p_{(a_{1}, \ldots , a_{m}, b)}(t_{1}, \ldots , t_{m}) := \log _{2}(a_{1} \cdot |t_{1}| + \cdots + a_{m} \cdot |t_{m}| + b)\) to also allow negative values for b (under the condition that \(\sum _i a_i + b \geqslant 1\)), and our generalised \((\mathsf {{let:tree}})\) rule can take advantage of these generalised resource functions (see [19] for a statement of the rule and the proof of its soundness as part of the proof of Theorem 3).
7 Conclusion
In this paper, we present the first fully-automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only been analysed (semi-)manually in the literature.
In future work, we envision extending our analysis to related probabilistic settings such as skip lists [30], randomised binary search trees [20] and randomised treaps [8]. We note that adapting the framework developed in this paper to new benchmarks will likely require identifying new potential functions and extending the type-and-effect system with typing rules for these potential functions. Further, on more theoretical grounds, we want to clarify the connection of the expected amortised cost analysis proposed here with Kaminski's \(\mathsf {ert}\)-calculus, cf. [15], and study whether the expected cost transformer is conceivable as a potential function.
Notes
 1.
An amortised analysis may always default to a worst-case analysis. In particular, the analysis in this section can be considered a worst-case analysis. However, we use the example to illustrate the general setup of our amortised analysis.
 2.
For ease of presentation, we elide the underlying semantics for now and simply write “” for the resulting tree \(t'\), obtained after evaluating .
 3.
The restriction to at most one tree type in the resulting type is nonessential and could be lifted. However, as our benchmark functions do not require this extension, we have elided it for ease of presentation.
References
Albers, S., Karpinski, M.: Randomized splay trees: theoretical and experimental results. IPL 81(4), 213–221 (2002). https://doi.org/10.1016/S0020-0190(01)00230-7
Avanzini, M., Barthe, G., Lago, U.D.: On continuation-passing transformations and expected cost analysis. PACMPL 5(ICFP), 1–30 (2021). https://doi.org/10.1145/3473592
Avanzini, M., Lago, U.D., Ghyselen, A.: Type-based complexity analysis of probabilistic functional programs. In: Proceedings of 34th LICS, pp. 1–13. IEEE (2019). https://doi.org/10.1109/LICS.2019.8785725
Avanzini, M., Lago, U.D., Yamada, A.: On probabilistic term rewriting. Sci. Comput. Program. 185, 102338 (2020). https://doi.org/10.1016/j.scico.2019.102338
Avanzini, M., Moser, G., Schaper, M.: A modular cost analysis for probabilistic programs. PACMPL 4(OOPSLA), 172:1–172:30 (2020). https://doi.org/10.1145/3428240
Barthe, G., Katoen, J.P., Silva, A. (eds.): Foundations of Probabilistic Programming. Cambridge University Press, Cambridge (2020). https://doi.org/10.1017/9781108770750
Batz, K., Kaminski, B.L., Katoen, J., Matheja, C., Noll, T.: Quantitative separation logic: a logic for reasoning about probabilistic pointer programs. PACMPL 3(POPL), 34:1–34:29 (2019). https://doi.org/10.1145/3290347
Blelloch, G.E., ReidMiller, M.: Fast set operations using treaps. In: Proceedings of 10th SPAA, pp. 16–26 (1998). https://doi.org/10.1145/277651.277660
Bournez, O., Garnier, F.: Proving positive almost-sure termination. In: Giesl, J. (ed.) RTA 2005. LNCS, vol. 3467, pp. 323–337. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-32033-3_24
Chatterjee, K., Fu, H., Murhekar, A.: Automated recurrence analysis for almost-linear expected-runtime bounds. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 118–139. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_6
Eberl, M., Haslbeck, M.W., Nipkow, T.: Verified analysis of random binary tree structures. J. Autom. Reason. 64(5), 879–910 (2020). https://doi.org/10.1007/s10817-020-09545-0
Fürer, M.: Randomized splay trees. In: Proceedings of 10th SODA, pp. 903–904 (1999). http://dl.acm.org/citation.cfm?id=314500.315079
Gambin, A., Malinowski, A.: Randomized meldable priority queues. In: Rovan, B. (ed.) SOFSEM 1998. LNCS, vol. 1521, pp. 344–349. Springer, Heidelberg (1998). https://doi.org/10.1007/3-540-49477-4_26
Hofmann, M., Leutgeb, L., Moser, G., Obwaller, D., Zuleger, F.: Typebased analysis of logarithmic amortised complexity. MSCS (2021). https://doi.org/10.1017/S0960129521000232
Kaminski, B.L., Katoen, J., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected runtimes of randomized algorithms. JACM 65(5), 30:1–30:68 (2018). https://doi.org/10.1145/3208102
Kozen, D.: Semantics of probabilistic programs. J. Comput. Syst. Sci. 22(3), 328–350 (1981)
Kozen, D.: A probabilistic PDL. J. Comput. Syst. Sci. 30(2), 162–178 (1985). https://doi.org/10.1016/0022-0000(85)90012-1
Leutgeb, L., Moser, G., Zuleger, F.: ATLAS: automated amortised complexity analysis of self-adjusting data structures. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021. LNCS, vol. 12760, pp. 99–122. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81688-9_5
Leutgeb, L., Moser, G., Zuleger, F.: Automated expected amortised cost analysis of probabilistic data structures. arXiv:2206.03537 (2022)
Martínez, C., Roura, S.: Randomized binary search trees. JACM 45(2), 288–323 (1998). https://doi.org/10.1145/274787.274812
McIver, A., Morgan, C., Kaminski, B.L., Katoen, J.: A new proof rule for almost-sure termination. PACMPL 2(POPL), 33:1–33:28 (2018). https://doi.org/10.1145/3158121
Meyer, F., Hark, M., Giesl, J.: Inferring expected runtimes of probabilistic integer programs using expected sizes. In: TACAS 2021. LNCS, vol. 12651, pp. 250–269. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72016-2_14
Mitzenmacher, M., Upfal, E.: Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge (2005). https://doi.org/10.1017/CBO9780511813603
Moosbrugger, M., Bartocci, E., Katoen, J.P., Kovács, L.: Automated termination analysis of polynomial probabilistic programs. In: ESOP 2021. LNCS, vol. 12648, pp. 491–518. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72019-3_18
Motwani, R., Raghavan, P.: Randomized algorithms. In: Algorithms and Theory of Computation Handbook. Cambridge University Press (1999). https://doi.org/10.1201/9781420049503c16
de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
Ngo, V.C., Carbonneaux, Q., Hoffmann, J.: Bounded expectations: resource analysis for probabilistic programs. In: Proceedings of 39th PLDI, pp. 496–512 (2018). https://doi.org/10.1145/3192366.3192394
Nipkow, T., Brinkop, H.: Amortized complexity verified. JAR 62(3), 367–391 (2019)
Pierce, B.: Types and Programming Languages. MIT Press, Cambridge (2002)
Pugh, W.: Skip lists: a probabilistic alternative to balanced trees. CACM 33(6), 668–676 (1990). https://doi.org/10.1145/78973.78977
Schoenmakers, B.: A systematic analysis of splaying. IPL 45(1), 41–50 (1993)
Schoenmakers, B.: Data structures and amortized complexity in a functional setting. Ph.D. thesis, Eindhoven University of Technology (1992)
Schrijver, A.: Theory of Linear and Integer Programming. Wiley, Hoboken (1999)
Sleator, D., Tarjan, R.: Self-adjusting binary search trees. JACM 32(3), 652–686 (1985)
Takisaka, T., Oyabu, Y., Urabe, N., Hasuo, I.: Ranking and repulsing supermartingales for reachability in probabilistic programs. In: Lahiri, S.K., Wang, C. (eds.) ATVA 2018. LNCS, vol. 11138, pp. 476–493. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01090-4_28
Tarjan, R.: Amortized computational complexity. SIAM J. Alg. Disc. Meth. 6(2), 306–318 (1985)
Wang, D., Kahn, D.M., Hoffmann, J.: Raising expectations: automating expected cost analysis with types. PACMPL 4(ICFP), 110:1–110:31 (2020). https://doi.org/10.1145/3408992
Wang, P., Fu, H., Goharshady, A.K., Chatterjee, K., Qin, X., Shi, W.: Cost analysis of nondeterministic probabilistic programs. In: Proceedings of 40th PLDI, pp. 204–220. ACM (2019)
Winskel, G.: The Formal Semantics of Programming Languages. FCS, MIT Press (1993). https://doi.org/10.7551/mitpress/3054.003.0004
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
Cite this paper
Leutgeb, L., Moser, G., Zuleger, F. (2022). Automated Expected Amortised Cost Analysis of Probabilistic Data Structures. In: Shoham, S., Vizel, Y. (eds) Computer Aided Verification. CAV 2022. Lecture Notes in Computer Science, vol 13372. Springer, Cham. https://doi.org/10.1007/978-3-031-13188-2_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-13187-5
Online ISBN: 978-3-031-13188-2