1 Introduction

Probabilistic variants of well-known computational models such as automata, Turing machines or the \(\lambda \)-calculus have been studied since the early days of computer science (see [16, 17, 25] for early references). One of the main reasons for considering probabilistic models is that they often allow for the design of more efficient algorithms than their deterministic counterparts (see e.g. [6, 23, 25]). Another avenue for the design of efficient algorithms has been opened up by Sleator and Tarjan [34, 36] with their introduction of the notion of amortised complexity. Here, the cost of a single data structure operation is not analysed in isolation but as part of a sequence of data structure operations. This allows for the design of algorithms where the cost of an expensive operation is averaged out over multiple operations and results in a good overall worst-case cost. Both methodologies—probabilistic programming and amortised complexity—can be combined for the design of even more efficient algorithms, as for example in randomized splay trees [1], where a rotation in the splaying operation is only performed with some probability (which improves the overall performance by skipping some rotations while still guaranteeing that enough rotations are performed).

In this paper, we present the first fully-automated expected amortised cost analysis of probabilistic data structures, that is, of randomised splay trees, randomised splay heaps, randomised meldable heaps and a randomised analysis of a binary search tree. So far, these data structures have only been analysed (semi-)manually in the literature. Our analysis is based on a novel type-and-effect system, which constitutes a generalisation of the type system studied in [14, 18] to the non-deterministic and probabilistic setting, as well as an extension of the type system introduced in [37] to sublinear bounds and non-determinism. We provide a prototype implementation that is able to fully automatically analyse the case studies mentioned above. We summarise here the main contributions of our article: (i) We consider a first-order functional programming language with support for sampling over discrete distributions, non-deterministic choice and a ticking operator, which allows for the specification of fine-grained cost models. (ii) We introduce compact small-step as well as big-step semantics for our programming language. These semantics are equivalent wrt. the obtained normal forms (i.e., the resulting probability distributions) but differ wrt. the cost assigned to non-terminating computations. (iii) Based on [14, 18], we develop a novel type-and-effect system that strictly generalises the prior approaches from the literature. (iv) We state two soundness theorems (see Sect. 5.3) based on two different—but strongly related—typing rules of ticking. The two soundness theorems are stated wrt. the small-step resp. big-step semantics because these semantics precisely correspond to the respective ticking rule.
The more restrictive ticking rule can be used to establish (positive) almost sure termination (AST), while the more permissive ticking rule supports the analysis of a larger set of programs (which can be very useful in case termination is not required or can be established by other means); in fact, the more permissive ticking rule is essential for the precise cost analysis of randomised splay trees. We note that the two ticking rules and corresponding soundness theorems do not depend on the details of the type-and-effect system, and we believe that they will be of independent interest (e.g., when adapting the framework of this paper to other benchmarks and cost functions). (v) Our prototype implementation \(\mathsf {ATLAS}\) strictly extends the earlier version reported on in [18], and all our earlier evaluation results can be replicated (and sometimes improved).

With our implementation and the obtained experimental results we make two contributions to the complexity analysis of data structures:

  1.

    We automatically infer bounds on the expected amortised cost, which could previously only be obtained by sophisticated pen-and-paper proofs. In particular, we verify that the amortised costs of randomised variants of self-adjusting data structures improve upon their non-randomised variants. In Table 1 we state the expected cost of the randomised data structures considered and their deterministic counterparts; the benchmarks are detailed in Sect. 2.

  2.

    We establish a novel approach to the expected cost analysis of data structures. Our research has been greatly motivated by the detailed study of Albers et al. in [1] of the expected amortised costs of randomised splaying. While [1] requires a sophisticated pen-and-paper analysis, our approach allows us to fully-automatically compare the effect of different rotation probabilities on the expected cost (see Table 2 of Sect. 6).

Table 1. Expected Amortised Cost of Randomised Data Structures. We also state the deterministic counterparts considered in [18] for comparison.

Related Work. The generalisation of the model of computation and the study of the expected resource usage of probabilistic programs has recently received increased attention (see e.g. [2, 4, 5, 7, 10, 11, 15, 21, 22, 24, 27, 37, 38]). We focus on related work concerned with the automation of expected cost analysis of deterministic or non-deterministic, probabilistic programs—imperative or functional. (A probabilistic program is called non-deterministic if it additionally makes use of non-deterministic choice.)

In recent years the automation of expected cost analysis of probabilistic data structures or programs has gained momentum, cf. [2,3,4,5, 22, 24, 27, 37, 38]. Notably, the Absynth prototype of [27] implements Kaminski’s \(\mathsf {ert}\)-calculus [15] for reasoning about expected costs. Avanzini et al. [5] generalise the \(\mathsf {ert}\)-calculus to an expected cost transformer and introduce the tool eco-imp, which provides a modular and thus more efficient and scalable alternative for non-deterministic, probabilistic programs. In comparison to these works, we base our analysis on a dedicated type system fine-tuned to express sublinear bounds; further, our prototype implementation \(\mathsf {ATLAS}\) derives bounds on the expected amortised cost. Neither is supported by Absynth or eco-imp.

Martingale-based techniques have been implemented, e.g., by Peixin Wang et al. [38]. Related results have been reported by Moosbrugger et al. [24]. Meyer et al. [22] provide an extension of the KoAT tool, generalising the concept of alternating size and runtime analysis to probabilistic programs. Again, these innovative tools are not suited to the benchmarks considered in our work. With respect to probabilistic functional programs, Di Wang et al. [37] provided the only prior expected cost analysis of (deterministic) probabilistic programs; this work is most closely related to our contributions. Indeed, our typing rule \((\mathsf {{ite:coin}})\) stems from [37], and the soundness proof wrt. the big-step semantics is conceptually similar. Nevertheless, our contributions strictly generalise their results. First, our core language is based on a simpler semantics, giving rise to cleaner formulations of our soundness theorems. Second, our type-and-effect system provides two different typing rules for ticking, which we exploit to increase the strength of our prototype implementation. Finally, our amortised analysis allows for logarithmic potential functions.

A large body of research concentrates on specific forms of martingales or Lyapunov ranking functions. All these works, however, are somewhat orthogonal to our contributions, as they foremost study termination (i.e., AST or PAST) rather than computational complexity. Still, these approaches can be partially adapted to a variety of quantitative program properties, see [35] for an overview, but are incomparable in strength to the results established here.

Structure. In the next section, we provide a bird’s eye view on our approach. Sections 3 and 4 detail the core probabilistic language employed, as well as its small- and big-step semantics. In Sect. 5 we introduce the novel type-and-effect system formally and state soundness of the system wrt. the respective semantics. In Sect. 6 we present evaluation results of our prototype implementation \(\mathsf {ATLAS}\). Finally, we conclude in Sect. 7. All proofs, part of the benchmarks and the source code are given in [19].

2 Overview of Our Approach and Results

In this section, we first sketch our approach on an introductory example and then detail the benchmarks and results depicted in Table 1 in the Introduction.

2.1 Introductory Example

Consider the definition of the function , depicted in Fig. 1. The expected amortised complexity of is \(\log _2(|{t} |)\), where \(|{t} |\) denotes the size of a tree t (defined as the number of leaves of the tree).Footnote 1 Our analysis is set up in terms of template potential functions with unknown coefficients, which will be instantiated by our analysis. Following [14, 18], our potential functions are composed of two types of resource functions, which can express logarithmic amortised cost: For a sequence of n trees \(t_{1}, \ldots , t_{n}\) and coefficients \(a_i \in {\mathbb {N}}, b \in {\mathbb {Z}}\), with \(\sum _{i=1}^n a_i + b \geqslant 0\), the resource function \(p_{\left( a_{1}, \ldots , a_{n}, b\right) }\left( t_{1}, \ldots , t_{n}\right) \mathrel {:=}\mathrm {log}_{2}\left( a_{1} \cdot \left| t_{1}\right| +\cdots +a_{n} \cdot \left| t_{n}\right| +b\right) \) denotes the logarithm of a linear combination of the sizes of the trees. The resource function \(\mathsf {rk}(t)\), which is a variant of Schoenmakers’ potential, cf. [28, 31, 32], is inductively defined as (i) ; (ii) , where l, r are the left resp. right child of the tree , and d is some data element that is ignored by the resource function. (We note that \(\mathsf {rk}(t)\) is not needed for the analysis of but is needed for more involved benchmarks, e.g. randomised splay trees.) With these resource functions at hand, our analysis introduces the coefficients \(q_*\), \(q_{(1,0)}\), \(q_{(0,2)}\), \(q'_*\), \(q'_{(1,0)}\), \(q'_{(0,2)}\) and employs the following Ansatz:Footnote 2

figure h

Here, denotes the expected cost of executing on tree t, where the cost is given by the ticks as indicated in the source code (each tick accounts for a recursive call). The result of our analysis will be an instantiation of the coefficients, returning \(q_{(1,0)} = 1\) and zero for all other coefficients, which allows us to directly read off the logarithmic bound \(\log _2(|{t} |)\) of .
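To make the resource functions concrete, the following Python sketch evaluates \(|t|\), \(\mathsf {rk}(t)\) and \(p_{(a_1,\ldots ,a_n,b)}\) on a tuple encoding of trees. The encoding, the function names and the base case \(\mathsf {rk}(\mathsf {leaf}) = 0\) (elided in the text above) are our assumptions, not part of the paper's language:

```python
import math

# Hypothetical encoding: a leaf is None, an inner node is (left, data, right).
def size(t):
    """|t|: the number of leaves of t (a single leaf has size 1)."""
    if t is None:
        return 1
    l, _, r = t
    return size(l) + size(r)

def rk(t):
    """Variant of Schoenmakers' potential; we assume rk(leaf) = 0 and
    rk(node l d r) = rk(l) + log2|l| + log2|r| + rk(r)."""
    if t is None:
        return 0.0
    l, _, r = t
    return rk(l) + math.log2(size(l)) + math.log2(size(r)) + rk(r)

def p(coeffs, trees):
    """p_{(a1,...,an,b)}(t1,...,tn) = log2(a1*|t1| + ... + an*|tn| + b)."""
    *a, b = coeffs
    return math.log2(sum(ai * size(ti) for ai, ti in zip(a, trees)) + b)
```

For a three-leaf tree t, for instance, p((1, 0), [t]) evaluates to log2(3): this is the resource function weighted by the coefficient \(q_{(1,0)}\) in the Ansatz above.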

Our analysis is formulated as a type-and-effect system, introducing the above template potential functions for every subexpression of the program under analysis. The typing rules of our system give rise to a constraint system over the unknown coefficients that captures the relationship between the potential functions of the subexpressions of the program. Solving the constraint system then yields a valid instantiation of the potential function coefficients. Our type-and-effect system constitutes a generalisation of the type system studied in [14, 18] to the non-deterministic and probabilistic setting, as well as an extension of the type system introduced in [37] to sublinear bounds and non-determinism.

Fig. 1.
figure 1


In the following, we survey our type-and-effect system by means of example . A partial type derivation is given in Fig. 2. For brevity, type judgements and the type rules are presented in a simplified form. In particular, we restrict our attention to tree types, denoted as \(\mathsf {T}\). This omission is inessential to the actual complexity analysis. For the full set of rules see [19]. We now discuss this type derivation step by step.

Let e denote the body of the function definition of , cf. Fig. 1. Our automated analysis infers an annotated type by verifying that the type judgement \({t}{:}{\mathsf {T}}{\mid }Q\,\vdash \,{e}{:}{\mathsf {T}}{\mid }Q'\) is derivable. Types are decorated with annotations \(Q \mathrel {:=}[q_*,q_{(1,0)},q_{(0,2)}]\) and \(Q' \mathrel {:=}[q'_*,q'_{(1,0)},q'_{(0,2)}]\)—employed to express the potential carried by the arguments to and its results. Annotations fix the coefficients of the resource functions in the corresponding potential functions, e.g., (i) \(\varPhi ({{t}{:}{\mathsf {T}}} {\mid } {Q}) \mathrel {:=}q_*\cdot \mathsf {rk}(t) + q_{(1,0)} \cdot p_{(1,0)}(t) + q_{(0,2)}\cdot p_{(0,2)}(t)\) and (ii) \(\varPhi ({{e}{:}{\mathsf {T}}} {\mid } {Q'}) \mathrel {:=}q'_*\cdot \mathsf {rk}(e) + q'_{(1,0)} \cdot p_{(1,0)}(e) + q'_{(0,2)} \cdot p_{(0,2)}(e)\).
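Concretely, an annotation is just a vector of coefficients, and the induced potential is a weighted sum of resource functions. A small sketch of how \(\varPhi ({t{:}\mathsf {T}} \mid Q)\) could be evaluated for \(Q = [q_*, q_{(1,0)}, q_{(0,2)}]\) (tree encoding and base case \(\mathsf {rk}(\mathsf {leaf}) = 0\) are our assumptions):

```python
import math

def size(t):                      # |t| = number of leaves; None is a leaf
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):                        # assuming rk(leaf) = 0 (base case elided)
    if t is None:
        return 0.0
    l, _, r = t
    return rk(l) + math.log2(size(l)) + math.log2(size(r)) + rk(r)

def phi(t, Q):
    """Phi(t:T | Q) = q_* rk(t) + q_(1,0) p_(1,0)(t) + q_(0,2) p_(0,2)(t)."""
    q_star, q10, q02 = Q
    return (q_star * rk(t)
            + q10 * math.log2(1 * size(t) + 0)    # p_(1,0)(t) = log2|t|
            + q02 * math.log2(0 * size(t) + 2))   # p_(0,2)(t) = log2(2) = 1
```

With the solution reported further below, Q = [0, 1, 0], the potential phi(t, Q) collapses to log2|t|.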

By our soundness theorems (see Sect. 5.3), such a typing guarantees that the expected amortised cost of is bounded by the expectation (wrt. the value distribution in the limit) of the difference between \(\varPhi ({{t}{:}{\mathsf {T}}} {\mid } {Q})\) and . Because e is a expression, the following rule is applied (we only state a restricted rule here, the general rule can be found in [19]):

figure q

Here \(e_1\) denotes the subexpression of e that corresponds to the case of . Apart from the annotations Q, \(Q_1\) and \(Q'\), the rule \((\mathsf {{match}})\) constitutes a standard type rule for pattern matching. With regard to the annotations Q and \(Q_1\), \((\mathsf {{match}})\) ensures the correct distribution of potential by inducing the constraints

$$\begin{aligned} q^1_1 = q^1_2 = q_*&q^1_{(1,1,0)} = q_{(1,0)}&q^1_{(1,0,0)} = q^1_{(0,1,0)} = q_*&q^1_{(0,0,2)} = q_{(0,2)} {\;,}\end{aligned}$$

where the constraints are immediately justified by recalling the definitions of the resource functions \(p_{\left( a_{1}, \ldots , a_{n}, b\right) }\left( t_{1}, \ldots , t_{n}\right) :=\mathrm {log} _{2}\left( a_{1} \cdot \left| t_{1}\right| +\cdots +a_{n} \cdot \left| t_{n}\right| +b\right) \) and \(\mathsf {rk}(t) = \mathsf {rk}(l) + \log _2(|{l} |) + \log _2(|{r} |) + \mathsf {rk}(r)\).
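These constraints can be checked numerically: for any annotation Q, unfolding \(\mathsf {rk}(t)\) at a node with children l, r and transferring the coefficients as prescribed leaves the potential unchanged. A sketch under our assumptions (tuple tree encoding, \(\mathsf {rk}(\mathsf {leaf}) = 0\), an arbitrarily chosen Q):

```python
import math

def size(t):                      # |t| = number of leaves; None is a leaf
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):                        # assuming rk(leaf) = 0
    if t is None:
        return 0.0
    l, _, r = t
    return rk(l) + math.log2(size(l)) + math.log2(size(r)) + rk(r)

def phi_t(t, q_star, q10, q02):   # Phi(t:T | Q)
    return q_star * rk(t) + q10 * math.log2(size(t)) + q02 * math.log2(2)

def phi_lr(l, r, q1, q2, q110, q100, q010, q002):  # Phi(l:T, r:T | Q1)
    return (q1 * rk(l) + q2 * rk(r)
            + q110 * math.log2(size(l) + size(r))
            + q100 * math.log2(size(l)) + q010 * math.log2(size(r))
            + q002 * math.log2(2))

# The (match) constraints: q1_1 = q1_2 = q1_(1,0,0) = q1_(0,1,0) = q_*,
# q1_(1,1,0) = q_(1,0), q1_(0,0,2) = q_(0,2) preserve the potential.
q_star, q10, q02 = 0.5, 1.0, 0.25          # arbitrary annotation Q
l, r = (None, 0, None), ((None, 1, None), 2, None)
lhs = phi_t((l, 9, r), q_star, q10, q02)
rhs = phi_lr(l, r, q_star, q_star, q10, q_star, q_star, q02)
```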

Fig. 2.
figure 2

Partial Type Derivation for Function

The next rule is a structural rule, representing a weakening step that rewrites the annotations of the variable context. The rule \((\mathsf {{w}})\) allows a suitable adaptation of the coefficients based on the following inequality, which holds for any substitution \(\sigma \) of variables by values, \(\varPhi ({\sigma };{{l}{:}{\mathsf {T}}, {r}{:}{\mathsf {T}}} {\mid } {Q_1}) \geqslant \varPhi ({\sigma };{{l}{:}{\mathsf {T}}, {r}{:}{\mathsf {T}}} {\mid } {Q_2})\).

figure u

In our prototype implementation this comparison is performed symbolically. We use a variant of Farkas’ Lemma [19, 33] in conjunction with simple mathematical facts about the logarithm to linearise this symbolic comparison, namely the monotonicity of the logarithm and the fact that \(2 + \log _2(x) + \log _2(y) \leqslant 2\log _2(x+y)\) for all \(x,y \geqslant 1\). For example, Farkas’ Lemma in conjunction with the latter fact gives rise to

$$\begin{aligned} q^1_{(0,0,2)} + 2f&\geqslant q^2_{(0,0,2)}&q^1_{(1,1,0)} - 2f&\geqslant q^2_{(1,1,0)} \\ q^1_{(1,0,0)} + f&\geqslant q^2_{(1,0,0)}&q^1_{(0,1,0)} + f&\geqslant q^2_{(0,1,0)}{\;,}\end{aligned}$$

for some fresh rational coefficient \(f \geqslant 0\) introduced by Farkas’ Lemma. After having generated the constraint system for , the solver is free to instantiate f as needed. In fact in order to discover the bound \(\log _2(|{t} |)\) for , the solver will need to instantiate , corresponding to the inequality .
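The logarithm fact used in the linearisation is easy to validate numerically; a quick sanity check of our own (not part of the tool), exploiting that the inequality is AM-GM in disguise:

```python
import math

# Sanity check: 2 + log2(x) + log2(y) <= 2*log2(x + y) for all x, y >= 1.
# Equivalently (AM-GM): x + y >= 2*sqrt(x*y); take log2 of both sides twice.
def log_fact_holds(x, y, eps=1e-12):
    return 2 + math.log2(x) + math.log2(y) <= 2 * math.log2(x + y) + eps

# Evaluate the inequality on a grid of values >= 1.
checks = [log_fact_holds(1 + i / 4, 1 + j / 4)
          for i in range(100) for j in range(100)]
```

The case x = y = 1 is the equality case, which is why the constant 2 in the fact cannot be improved.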

Fig. 3.
figure 3

Partial function of Randomised Meldable Heaps

So far, the rules did not refer to sampling and are unchanged from their (non-probabilistic) counterparts introduced in [14, 18]. The next rule, however, formalises a coin toss, biased with probability p. Our general rule \((\mathsf {{ite:coin}})\) is depicted in Fig. 12 and is inspired by a similar rule for coin tosses that has recently been proposed in the literature, cf. [37]. This rule specialises as follows to our introductory example:

figure y

Here \(e_2\) and \(e_3\), respectively, denote the subexpressions of the conditional, and in addition the crucial condition holds. This condition, expressing that the corresponding annotations are subject to the probability of the coin toss, gives rise to the following constraints (among others):

In the following, we will only consider one alternative of the coin toss and proceed as in the partial type derivation depicted in Fig. 2 (i.e., we state the -branch and omit the symmetric -branch). Next, we apply the rule for the  expression. This rule is the most involved typing rule in the system proposed in [14, 18]. However, for our leading example it suffices to consider the following simplified variant:

Fig. 4.
figure 4

Partial function of Randomised Splay Trees (zigzig-case)

figure ad

Focusing on the annotations, the rule \((\mathsf {{let:tree}})\) suitably distributes potential assigned to the variable context, embodied in the annotation \(Q_3\), to the recursive call within the expression (via annotation \(Q_4\)) and the construction of the resulting tree (via annotation \(Q_7\)). The distribution of potential is facilitated by generating constraints that can roughly be stated as two “equalities”, that is, (i) “\(Q_3 = Q_4 + D\)”, and (ii) “\(Q_7 = D + Q_6\)”. Equality (i) states that the input potential is split into some potential \(Q_4\) used for typing and some remainder potential D (which, however, is not constructed explicitly and only serves as a placeholder for potential that will be passed on). Equality (ii) states that the potential \(Q_7\) used for typing equals the remainder potential D plus the leftover potential \(Q_6\) from the typing of . The \((\mathsf {{tick:now}})\) rule then ensures that costs are properly accounted for by generating constraints for \(Q_4 = Q_5 + 1\) (see Fig. 2). Finally, the type derivation ends with the application rule, denoted as \((\mathsf {{app}})\), which verifies that the recursive call is well-typed wrt. the (annotated) signature of the function , i.e., the rule enforces that \(Q_5 = Q\) and \(Q_6 = Q'\). We illustrate (a subset of) the constraints induced by \((\mathsf {{let}})\), \((\mathsf {{tick:now}})\) and \((\mathsf {{app}})\):

$$\begin{aligned} q^3_{(1,0,0)}&= q^4_{(1,0)}&q^3_{(0,1,0)}&= q^7_{(0,1,0)}&q'_1&= q^6_1&q^4_{(0,2)}&= q^5_{(0,2)} + 1 \\ q^3_{(0,0,2)}&= q^4_{(0,2)}&q^3_2&= q^7_2&q'_{(1,0)}&= q^6_{(1,0)}&q^4_{(1,0)}&= q^5_{(1,0)} \\ q^3_1&= q^4_1&q'_{(0,2)}&= q^6_{(0,2)}&q^6_1&= q^7_1&q^5_{(1,0)}&= q_{(1,0)} {\;,}\end{aligned}$$

where (i) the constraints in the first three columns—involving the annotations \(Q_3\), \(Q_4\), \(Q_6\) and \(Q_7\)—stem from the constraints of the rule \((\mathsf {{let:tree}})\); (ii) the constraints in the last column—involving \(Q_4\), \(Q_5\), Q and \(Q'\)—stem from the constraints of the rule \((\mathsf {{tick:now}})\) and \((\mathsf {{app}})\). For example, \(q^3_{(1,0,0)} = q^4_{(1,0)}\) and \(q^3_{(0,1,0)} = q^7_{(0,1,0)}\) distributes the part of the logarithmic potential represented by \(Q_3\) to \(Q_4\) and \(Q_7\); \(q^6_1 = q^7_1\) expresses that the rank of the result of evaluating the recursive call can be employed in the construction of the resulting tree ; \(q^4_{(1,0)} = q^5_{(1,0)}\) and \(q^4_{(0,2)} = q^5_{(0,2)} + 1\) relate the logarithmic resp. constant potential according to the tick rule, where the addition of one accounts for the cost embodied by the tick rule; \(q^5_{(1,0)} = q_{(1,0)}\) stipulates that the potential at the recursive call site must match the function type.
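To illustrate how such a constraint system is discharged, the following sketch checks one satisfying assignment by hand. The concrete values are our own consistent choice, not \(\mathsf {ATLAS}\) output; only the equalities mirror the constraints displayed above:

```python
# A hand-picked assignment satisfying the displayed constraints
# (hypothetical values; the solver would search for some such solution).
q = {
    "q_(1,0)": 1, "q5_(1,0)": 1, "q5_(0,2)": 0,      # (app): Q5 = Q
    "q4_(1,0)": 1, "q4_(0,2)": 1, "q4_1": 0,         # (tick): Q4 = Q5 + 1
    "q3_(1,0,0)": 1, "q3_(0,0,2)": 1, "q3_1": 0,     # (let): Q3 feeds Q4 ...
    "q3_(0,1,0)": 0, "q3_2": 0,                      # ... and Q7
    "q7_(0,1,0)": 0, "q7_2": 0, "q7_1": 0,
    "q6_1": 0, "q6_(1,0)": 0, "q6_(0,2)": 0,         # Q6 = Q'
    "q'_1": 0, "q'_(1,0)": 0, "q'_(0,2)": 0,
}
equalities = [
    ("q3_(1,0,0)", "q4_(1,0)"), ("q3_(0,1,0)", "q7_(0,1,0)"),
    ("q'_1", "q6_1"), ("q3_(0,0,2)", "q4_(0,2)"), ("q3_2", "q7_2"),
    ("q'_(1,0)", "q6_(1,0)"), ("q4_(1,0)", "q5_(1,0)"),
    ("q3_1", "q4_1"), ("q'_(0,2)", "q6_(0,2)"), ("q6_1", "q7_1"),
    ("q5_(1,0)", "q_(1,0)"),
]
ok = all(q[a] == q[b] for a, b in equalities)
tick_ok = q["q4_(0,2)"] == q["q5_(0,2)"] + 1         # the cost of one tick
```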

Our prototype implementation \(\mathsf {ATLAS}\) collects all these constraints and solves them fully automatically. Following [14, 18], our implementation in fact searches for a solution that minimises the resulting complexity bound. For the function, our implementation finds a solution that sets \(q_{(1,0)}\) to 1, and all other coefficients to zero. Thus, the logarithmic bound \(\log _2(|{t} |)\) follows.

2.2 Overview of Benchmarks and Results

Randomised Meldable Heaps. Gambin et al. [13] proposed meldable heaps as a simple priority-queue data structure that is guaranteed to have expected logarithmic cost for all operations. All operations can be implemented in terms of the function, which takes two heaps and returns a single heap as a result. The partial source code of is given in Fig. 3 (the full source code of all examples can be found in [19]). Our tool \(\mathsf {ATLAS}\) fully-automatically infers the bound \(\log _2(|{h_1} |) + \log _2(|{h_2} |)\) on the expected cost of .
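The melding strategy can be sketched as follows. The tuple encoding, the helper names and the min-heap orientation are our assumptions, not the code of Fig. 3:

```python
import random

# A heap is None (empty) or (key, left, right) with the min-heap property.
def meld(h1, h2, rng=random):
    """Meld two heaps by descending into a uniformly random child of the
    heap with the smaller root; the expected cost is logarithmic in both
    heap sizes (Gambin et al.)."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h1[0] > h2[0]:                 # keep the smaller root on top
        h1, h2 = h2, h1
    k, l, r = h1
    if rng.random() < 0.5:            # fair coin toss
        return (k, meld(l, h2, rng), r)
    return (k, l, meld(r, h2, rng))

def keys(h):
    """All keys of h, for inspection."""
    return [] if h is None else [h[0]] + keys(h[1]) + keys(h[2])
```

As noted above, the other operations reduce to melding: insertion melds a singleton heap, and delete-min melds the two children of the root.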

Fig. 5.
figure 5

  function of a Binary Search Tree with randomized comparison

Randomised Splay Trees. Albers et al. [1] propose these splay trees as a variation of deterministic splay trees [34], with better expected runtime (the same computational complexity in O-notation, but with smaller constants). Related results have been obtained by Fürer [12]. The proposal is based on the observation that it is not necessary to rotate the tree in every (recursive) splaying operation; rather, it suffices to perform rotations with some fixed positive probability in order to reap the asymptotic benefits of self-adjusting search trees. The theoretical analysis of randomised splay trees [1] starts by refining the cost model of [34], which simply counts the number of rotations, into one that accounts for recursive calls with a cost of c and for rotations with a cost of d.

We present a snippet of a functional implementation of randomised splay trees in Fig. 4. We note that in this code snippet we have set ; this choice is arbitrary: we have chosen these costs in order to be able to compare the resulting amortised costs to the deterministic setting of [18], where the combined cost of the recursive call and rotation is set to 1. Our analysis requires fixed costs c and d, but these constants can be chosen by the user; for example, one can set \(c=1\) and \(d=2.75\), corresponding to the costs observed during the experiments in [1]. Likewise, the probability of the coin toss has been arbitrarily set to but could be set differently by the user. (We remark that, to the best of our knowledge, no theoretical analysis has been conducted on how to choose the best value of p for given costs c and d.) Our prototype implementation is able to fully automatically infer an amortised complexity bound of for (with c, d and p fixed as above), which improves on the complexity bound of for the deterministic version of as reported in [18], confirming that randomisation indeed improves the expected runtime.

We remark on how the amortised complexity bound of for is computed by our analysis. Our tool \(\mathsf {ATLAS}\) computes an annotated type for that corresponds to the inequality

figure ao

By setting as potential function in the sense of Tarjan and Sleator [34, 36], the above inequality allows us to directly read off an upper bound on the amortised complexity of (we recall that the amortised complexity in the sense of Tarjan and Sleator is defined as the sum of the actual costs plus the output potential minus the input potential):

figure aq
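The telescoping argument behind this definition is easy to replay numerically: summing the per-operation amortised costs \(a_i = c_i + \varPhi _{i+1} - \varPhi _i\) leaves only the total actual cost plus the net potential change. A sketch with made-up numbers (not data from the benchmarks):

```python
# Hypothetical per-operation actual costs c_i and potentials Phi_0..Phi_n.
actual = [1, 5, 1, 9, 1]
phi = [0, 4, 0, 6, 0, 2]              # one potential per intermediate state

# Amortised cost of step i: a_i = c_i + Phi_{i+1} - Phi_i.
amortised = [c + phi[i + 1] - phi[i] for i, c in enumerate(actual)]

# The sum telescopes: total amortised = total actual + Phi_n - Phi_0, so
# whenever Phi_n >= Phi_0, the amortised sum bounds the total actual cost.
total_amortised = sum(amortised)
total_actual = sum(actual)
```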

Probabilistic Analysis of Binary Search Trees. We present a probabilistic analysis of a deterministic binary search tree, which offers the usual , , and operations, where uses   given in Fig. 6, as a subroutine (the source code of the missing operations is given in [19]). We assume that the elements inserted, deleted and searched for are equally distributed; hence, we conduct a probabilistic analysis by replacing every comparison with a coin toss of probability one half. We will refer to the resulting data structure as Coin Search Tree in our benchmarks. The source code of is given in Fig. 5.

Our tool \(\mathsf {ATLAS}\) infers a logarithmic expected amortised cost for all operations, i.e., for and we obtain (i) ; and (ii) , from which we obtain an expected amortised cost of and , respectively.

Fig. 6.
figure 6

  function of a Coin Search Tree with one rotation

3 Probabilistic Functional Language

Preliminaries. Let \({\mathbb {R}^+_0}\) denote the non-negative reals and \({\mathbb {R}^{+\infty }_0}\) their extension by \(\infty \). We are only concerned with discrete distributions and drop “discrete” in the following. Let A be a countable set and let \(\mathsf {D}(A)\) denote the set of (sub)distributions \(\mu \) over A, whose support \(\mathsf {supp}(\mu ) \mathrel {:=}\{a \in A \mid \mu (a) \not = 0\}\) is countable. Distributions are denoted by Greek letters. For \(\mu \in \mathsf {D}(A)\), we may write \(\mu = \{a^{p_i}_i\}_{i \in I}\), assigning probabilities \(p_i\) to \(a_i \in A\) for every \(i \in I\), where I is a suitably chosen index set. We set \(|{\mu } | \mathrel {:=}\sum _{i \in I} p_i\). If the support is finite, we simply write \(\mu = \{a^{p_1}_1,\dots ,a^{p_n}_n\}\). The expected value of a function \(f :A \rightarrow {\mathbb {R}^+_0}\) on \(\mu \in \mathsf {D}(A)\) is defined as \(\mathbb {E}_{\mu }(f) \mathrel {:=}\sum _{a \in \mathsf {supp}(\mu )} \mu (a) \cdot f(a)\). Further, we denote by \(\sum _{i \in I} p_i \cdot \mu _i\) the convex combination of distributions \(\mu _i\), where \(\sum _{i\in I} p_i \leqslant 1\). Since by assumption \(\sum _{i\in I} p_i \leqslant 1\), the combination \(\sum _{i \in I} p_i \cdot \mu _i\) is always a (sub-)distribution.

In the following, we also employ a slight extension of (discrete) distributions, dubbed multidistributions [4]. Multidistributions are countable multisets \(\{{a_i^{p_i} }\}_{i \in I}\) over pairs \(p_i :a_i\) of probabilities \(0 < p_i \leqslant 1\) and objects \(a_i \in A\) with \(\sum _{i \in I} p_i \leqslant 1\). (For ease of presentation, we do not distinguish notationally between sets and multisets.) Multidistributions over objects A are denoted by \(\mathsf {M}(A)\). For a multidistribution \(\mu \in \mathsf {M}(A)\) the induced distribution \(\overline{\mu } \in \mathsf {D}(A)\) is defined in the obvious way by summing up the probabilities of equal objects.
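These notions translate directly into code; a minimal sketch, where we choose to represent distributions as dictionaries and multidistributions as lists of probability/object pairs (the representation is our own):

```python
from collections import Counter

def expected(mu, f):
    """E_mu(f) = sum over the support of mu of mu(a) * f(a)."""
    return sum(p * f(a) for a, p in mu.items())

def convex(parts):
    """Convex combination sum_i p_i * mu_i; again a (sub)distribution
    whenever sum_i p_i <= 1."""
    out = Counter()
    for p, mu in parts:
        for a, q in mu.items():
            out[a] += p * q
    return dict(out)

def induced(multi):
    """The distribution induced by a multidistribution [(p_i, a_i), ...]:
    the probabilities of equal objects are summed up."""
    out = Counter()
    for p, a in multi:
        out[a] += p
    return dict(out)
```

Note that induced collapses the multiset structure, which is exactly the information that multidistributions retain and plain distributions discard.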

Fig. 7.
figure 7

A Core Probabilistic (First-Order) Programming Language

Syntax. In Fig. 7, we detail the syntax of our core probabilistic (first-order) programming language. With the exception of ticks, expressions are given in to simplify the presentation of the operational semantics and the typing rules. In order to ease the readability, we make use of mild syntactic sugaring in the presentation of actual code (as we already did above).

To make the presentation more succinct, we assume only the following types: a set of base types \(\mathcal {B}\) such as Booleans , integers \(\mathsf {Int}\), or rationals \(\mathsf {Rat}\), product types, and binary trees \(\mathsf {T}\), whose internal nodes are labelled with elements \({b}{:}{\mathsf {B}}\), where \(\mathsf {B}\) denotes an arbitrary base type. Values are either of base types, trees or pairs of values. We use lower-case Greek letters (from the beginning of the alphabet) for the denotation of types. Elements \({t}{:}{\mathsf {T}}\) are defined by the following grammar which fixes notation. . The size of a tree is the number of leaves: , .

We skip the standard definition of integer constants \(n \in {\mathbb {Z}}\) as well as variable declarations, cf. [29]. Furthermore, we omit binary operators with the exception of essential comparisons. As mentioned, to represent sampling we make use of a dedicated -- expression, whose guard evaluates to depending on a coin toss with fixed probability. Further, non-deterministic choice is similarly rendered via an -- expression. Moreover, we make use of ticking, denoted by an operator to annotate costs, where a, b are optional and default to one. Following Avanzini et al. [2], we represent ticking as an operation, rather than in , as in Wang et al. [37]. (This allows us to employ a big-step semantics that only accumulates the cost of terminating expressions.) The set of all expressions is denoted \(\mathcal {E}\).

A typing context is a mapping from variables \(\mathcal {V}\) to types. Type contexts are denoted by upper-case Greek letters, and the empty context is denoted \(\varepsilon \). A program \(\mathsf {P}\) consists of a signature \(\mathcal {F}\) together with a set of function definitions of the form \(f~x_1~\dots ~x_n = e_f\), where the \(x_i\) are variables and \(e_f\) an expression. When considering some expression e that includes function calls, we will always assume that these function calls are defined by some program \(\mathsf {P}\). A substitution (or environment) \(\sigma \) is a mapping from variables to values that respects types. Substitutions are denoted as sets of assignments: \(\sigma =\left\{ x_{1} \mapsto t_{1}, \ldots , x_{n} \mapsto t_{n}\right\} \). We write \(\mathsf {dom}(\sigma )\) to denote the domain of \(\sigma \).

Fig. 8.
figure 8

One-Step Reduction Rules

4 Operational Semantics

Small-Step Semantics. The small-step semantics is formalised as a (weighted) non-deterministic, probabilistic abstract reduction system [4, 9] over \(\mathsf {M}(\mathcal {E})\). In this way, (expected) cost, non-determinism and probabilistic sampling are taken care of. Informally, a probabilistic abstract reduction system is a transition system in which reducts are chosen from a probability distribution. A reduction wrt. such a system is then given by a stochastic process [9], or equivalently, as a reduction relation over multidistributions [4], which arise naturally in the context of non-determinism (we refer the reader to [4] for an example that illustrates the advantage of multidistributions in the presence of non-determinism).

Following [5], we equip transitions with (positive) weights, amounting to the cost of the transition. Formally, a (weighted) Probabilistic Abstract Reduction System (PARS) on a countable set A is a ternary relation \(\cdot {\mathop {\mathrel {\mapsto }}\limits ^{\cdot }} \cdot \subseteq A \times {\mathbb {R}^+_0}\times \mathsf {D}(A)\). For \(a \in A\), a rule \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ b^{\mu (b)} \}_{b \in A}\) indicates that a reduces to b with probability \(\mu (b)\) and cost \(c \in {\mathbb {R}^+_0}\). Note that any right-hand side of a PARS is supposed to be a full distribution, i.e., the probabilities in \(\mu \) sum up to 1. Given two objects a and b, \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ b^1 \}\) will be written \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} b\) for brevity. An object \(a \in A\) is called terminal if there is no rule \(a {\mathop {\mathrel {\mapsto }}\limits ^{c}} \mu \), denoted . We cast the one-step reduction relation \(\mathrel {\mapsto }\) given in Fig. 8 as a (non-deterministic) PARS over multidistributions. As above, we sometimes identify Dirac distributions \(\{e^1\}\) with e. Evaluation contexts are formed by expressions, as in the following grammar: . We denote by \(\mathbb {C}[e]\) the result of substituting the expression e for the empty context \(\Box \). Contexts are exploited to lift the one-step reduction to a ternary weighted reduction relation \({{\mathop {\longrightarrow }\limits ^{\cdot }}} \subseteq {\mathsf {M}(\mathcal {E}) \times {\mathbb {R}^{+\infty }_0}\times \mathsf {M}(\mathcal {E})}\), cf. Fig. 9. (In (Conv), \(\biguplus \) refers to the usual notion of multiset union.)

The relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\) constitutes the operational (small-step) semantics of our simple probabilistic function language. Thus \(\mu {\mathop {\longrightarrow }\limits ^{c}} \nu \) states that the submultidistribution of objects \(\mu \) evolves to a submultidistribution of reducts \(\nu \) in one step, with an expected cost of c. Note that since \({\mathop {\mathrel {\mapsto }}\limits ^{\cdot }}\) is non-deterministic, so is the reduction relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\). We now define the evaluation of an expression \(e \in \mathcal {E}\) wrt. the small-step relation \({\mathop {\longrightarrow }\limits ^{\cdot }}\): We set \(e {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \), if there is a (possibly infinite) sequence \(\{{e^1}\} {\mathop {\longrightarrow }\limits ^{c_1}} \mu _1 {\mathop {\longrightarrow }\limits ^{c_2}} \mu _2 {\mathop {\longrightarrow }\limits ^{c_3}} \dots \) with \(c = \sum _{n \geqslant 1} c_n\) and \(\mu = \lim _{n \rightarrow \infty } {\overline{\mu _n}}\!{\restriction _{V}}\), where \({\overline{\mu _n}}\!{\restriction _{V}}\) denotes the restriction of the distribution \(\overline{\mu _n}\) (induced by the multidistribution \(\mu _n\)) to a (sub-)distribution over values. Note that the \({\overline{\mu _n}}\!{\restriction _{V}}\) form a CPO wrt. the pointwise ordering, cf. [39]. Hence, the fixed point \(\mu =\lim _{n \rightarrow \infty } {\overline{\mu _n}}\!{\restriction _{V}}\) exists. We also write \(e {\mathop {\longrightarrow }\limits ^{}}_\infty \mu \) in case the cost of the evaluation is not important.

(Positive) Almost Sure Termination. A program \(\mathsf {P}\) is almost surely terminating (AST) if for any substitution \(\sigma \) and any evaluation \(e\sigma {\mathop {\longrightarrow }\limits ^{}}_\infty \mu \), we have that \(\mu \) forms a full distribution. For the definition of positive almost sure termination, we assume that every statement of \(\mathsf {P}\) is enclosed in a ticking operation with cost one; we note that such a cost models the length of the computation. We say \(\mathsf {P}\) is positively almost surely terminating (PAST) if for any substitution \(\sigma \) and any evaluation \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \), we have \(c < \infty \). It is well known that PAST implies AST, cf. [9].
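As a toy illustration (an assumption of ours, not an example from the paper), consider a rule that stops with probability p and otherwise loops, each step costing one tick. The sketch below accumulates the probability mass that has reached a value and the expected cost over a finite prefix of the evaluation:

```python
# Sketch of AST vs. PAST on the toy rule a ->^1 { a^(1-p), v^p },
# where v is a value: each step costs one tick and stops with probability p.
def evaluate(p_stop, max_steps=10_000):
    """Accumulate the value-(sub)distribution mass and the expected cost
    over the first max_steps reduction steps."""
    mass_alive, term_mass, exp_cost = 1.0, 0.0, 0.0
    for _ in range(max_steps):
        exp_cost += mass_alive            # every live copy pays one tick
        term_mass += mass_alive * p_stop  # mass reaching a value this step
        mass_alive *= (1 - p_stop)
    return term_mass, exp_cost

mass, cost = evaluate(0.5)
# mass -> 1 (AST: mu is a full distribution) and
# cost -> 2 (PAST: the expected number of ticks is finite, here 1/p = 2).
```

With `p_stop = 0.5` both limits are finite, so the toy program is PAST (and hence AST); a program can be AST yet fail PAST when the cost series diverges even though the value mass tends to 1.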

Fig. 9.
figure 9

Probabilistic Reduction Rules of Distributions of Expressions

Big-Step Semantics. We now define the aforementioned big-step semantics. We first define approximate judgments , see Fig. 10, which state that in derivation trees of depth up to n the expression e evaluates to a subdistribution \(\mu \) over values with cost c. We now consider the cost \(c_n\) and subdistribution \(\mu _n\) of these approximate judgments for \(n \rightarrow \infty \). Note that the subdistributions \(\mu _n\) form a CPO wrt. the pointwise ordering, cf. [39]. Hence, there exists a fixed point \(\mu = \lim _{n \rightarrow \infty } \mu _n\). Moreover, we set \(c = \lim _{n \rightarrow \infty } c_n\) (note that either \(c_n\) converges to some real \(c \in {\mathbb {R}^{+\infty }_0}\) or we have \(c = \infty \)). The big-step judgments are then defined by taking these limits \(\mu \) and c. We want to emphasise that the cost c of a big-step judgment only counts the ticks on terminating computations.
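The limit construction can be illustrated on a toy coin-flip program (our assumption, not from the paper), say `f() = tick; if coin(1/2) then v else f()`: derivations of depth at most n yield monotonically increasing \(\mu_n\) and \(c_n\), whose limits give the big-step judgment.

```python
# Approximate big-step judgments for the toy program above: a run that
# terminates after k+1 unfoldings has probability p*(1-p)^k and pays k+1 ticks.
def approx(n, p=0.5):
    """Return (mu_n, c_n): the terminating probability mass and the cost,
    counted on terminating runs only, for derivations of depth at most n."""
    mu = sum(p * (1 - p) ** k for k in range(n))
    c = sum((k + 1) * p * (1 - p) ** k for k in range(n))
    return mu, c

pairs = [approx(n) for n in range(1, 30)]
# mu_n and c_n increase monotonically with n; for p = 1/2 they converge
# to the fixed point mu = 1 and the cost c = 2, respectively.
```

Note how the depth-bounded cost only ever charges ticks on runs that complete within the bound, matching the remark that c counts ticks on terminating computations.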

Fig. 10.
figure 10

Big-Step Semantics.

Theorem 1

(Equivalence). Let \(\mathsf {P}\) be a program and \(\sigma \) a substitution. Then, (i) implies that \(e\sigma {\mathop {\longrightarrow }\limits ^{c'}}_\infty \mu \) for some \(c' \geqslant c\), and (ii) \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \) implies that for some \(c' \leqslant c\). Moreover, if \(e\sigma \) almost-surely terminates, we can choose \(c=c'\) in both cases.

The provided operational big-step semantics generalises the (big-step) semantics given in [18]. Further, while partly motivated by big-step semantics introduced in [37], our big-step semantics is technically incomparable—due to a different representation of ticking—while providing additional expressivity.

Fig. 11.
figure 11

Ticking Operator. Note that a, b are not variables but literal numbers.

5 Type-and-Effect System for Expected Cost Analysis

5.1 Resource Functions

In Sect. 2, we introduced a variant of Schoenmakers’ potential function, denoted as \(\mathsf {rk}(t)\), and the additional potential functions \(p_{\left( a_{1}, \ldots , a_{n}, b\right) }\left( t_{1}, \ldots , t_{n}\right) =\log _{2}\left( a_{1} \cdot \left| t_{1}\right| +\cdots +a_{n} \cdot \left| t_{n}\right| +b\right) \), denoting the \(\log _2\) of a linear combination of tree sizes. We demand \(\sum _{i=1}^n a_i + b \geqslant 0\) (\(a_i \in {\mathbb {N}}, b \in {\mathbb {Z}}\)) for well-definedness of the latter; \(\log _2\) denotes the logarithm to base 2. Throughout the paper we stipulate \(\log _2(0) \mathrel {:=}0\) in order to avoid case distinctions. Note that the constant function 1 is representable: \(1 = \lambda t. \log _2(0 \cdot |{t} |+2) = p_{(0, 2)}\). We are now ready to state the resource annotation of a sequence of trees.
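A minimal Python sketch of the potential functions \(p_{(a_1, \ldots, a_n, b)}\), with the convention \(\log_2(0) := 0\) built in; the tree sizes \(|t_i|\) are passed in as plain integers, since the definition of \(|\cdot|\) lies in Sect. 2, outside this excerpt.

```python
import math

def log2(x):
    """log2 with the paper's convention log2(0) := 0."""
    return 0.0 if x == 0 else math.log2(x)

def p(coeffs, b, sizes):
    """p_{(a1,...,an,b)}(t1,...,tn) = log2(a1*|t1| + ... + an*|tn| + b),
    where `sizes` holds the tree sizes |ti|.
    Well-definedness demands sum(coeffs) + b >= 0."""
    assert sum(coeffs) + b >= 0
    return log2(sum(a * s for a, s in zip(coeffs, sizes)) + b)

# The constant function 1 is representable as p_{(0,2)}:
assert p([0], 2, [17]) == 1.0   # log2(0*17 + 2) = 1, for any tree size
```

The final assertion checks the remark above: \(p_{(0,2)}(t) = \log_2(0 \cdot |t| + 2) = 1\) independently of t.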

Definition 1

A resource annotation or simply annotation of length m is a sequence \(Q=\left[ q_{1}, \ldots , q_{m}\right] \cup \left[ \left( q_{\left( a_{1}, \ldots , a_{m}, b\right) }\right) _{a_{i} \in \mathbb {N}, b \in \mathbb {Z}}\right] \), vanishing almost everywhere. The length of Q is denoted |Q|. The empty annotation, that is, the annotation where all coefficients are set to zero, is denoted as \(\varnothing \). Let \(t_{1}, \ldots , t_{m}\) be a sequence of trees. Then, the potential of \(t_{1}, \ldots , t_{m}\) wrt. Q is given by

$$\begin{aligned} \varPhi \left( t_{1}, \ldots , t_{m} \mid Q\right) :=\sum _{i=1}^{m} q_{i} \cdot {\text {rk}}\left( t_{i}\right) +\sum _{a_{1}, \ldots , a_{m} \in \mathbb {N}, b \in \mathbb {Z}} q_{\left( a_{1}, \ldots , a_{m}, b\right) } \cdot p_{\left( a_{1}, \ldots , a_{m}, b\right) }\left( t_{1}, \ldots , t_{m}\right) . \end{aligned}$$

In case of an annotation of length 1, we sometimes write \(q_*\) instead of \(q_1\). We may also write \(\varPhi ({{v}{:}{\alpha }} {\mid } {Q})\) for the potential of a value of type \(\alpha \) annotated with Q. Both notations were already used above. Note that only values of tree type are assigned a potential. We use the convention that the sequence elements of resource annotations are denoted by the lower-case letter of the annotation, potentially with corresponding sub- or superscripts.
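Definition 1 can be sketched as follows. Since \(\mathsf{rk}\) is defined in Sect. 2 (outside this excerpt), the ranks \(\mathsf{rk}(t_i)\) are passed in precomputed; annotations are stored sparsely, matching "vanishing almost everywhere".

```python
import math

def log2(x):
    """log2 with the paper's convention log2(0) := 0."""
    return 0.0 if x == 0 else math.log2(x)

def potential(ranks, sizes, q, qlog):
    """Phi(t1,...,tm | Q) as in Definition 1.
    ranks[i-1] stands for rk(t_i), sizes[i-1] for |t_i|;
    q maps i (1-based) to q_i, and qlog maps (a1,...,am,b)
    to the coefficient q_{(a1,...,am,b)}."""
    rank_part = sum(qi * ranks[i - 1] for i, qi in q.items())
    log_part = sum(
        c * log2(sum(a * s for a, s in zip(key[:-1], sizes)) + key[-1])
        for key, c in qlog.items()   # key = (a1, ..., am, b)
    )
    return rank_part + log_part

# Example 1: Q with only q_{(1,0)} = 1 gives Phi(t | Q) = log2(|t|).
assert potential(ranks=[3.0], sizes=[8], q={}, qlog={(1, 0): 1}) == 3.0
```

The sparse dictionaries are our representation choice; only the finitely many non-zero coefficients of Q need to be stored.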

Example 1

Let t be a tree. To model its potential as \(\log _2(|{t} |)\) in according to Definition 1, we simply set \(q_{(1,0)} \mathrel {:=}1\) and thus obtain \(\varPhi ({t} {\mid } {Q}) = \log _2(|{t} |)\), which describes the potential associated to the input tree t of our leading example above.    \(\square \)

Let \(\sigma \) be a substitution, let \(\varGamma \) denote a typing context and let \(x_{1}{:}\mathsf {T}, \ldots , x_{n}{:}\mathsf {T}\) denote all variables of tree type in \(\varGamma \). A resource annotation for \(\varGamma \) or simply annotation is an annotation for the sequence of trees \(x_{1} \sigma , \ldots , x_{n} \sigma \). We define the potential of the annotated context \(\varGamma {\mid } Q\) wrt. a substitution \(\sigma \) as \(\varPhi (\sigma ; \varGamma \mid Q):=\varPhi \left( x_{1} \sigma , \ldots , x_{n} \sigma \mid Q\right) \).

Definition 2

An annotated signature \(\mathcal {F}\) maps functions f to sets of pairs of annotated types for the arguments and the annotated type of the result:

$$\begin{aligned} \mathcal {F}(f):=\left\{ \alpha _{1} \times \cdots \times \alpha _{n} \mid Q \rightarrow \beta _{1} \times \cdots \times \beta _{k} \mid Q^{\prime } \;\big |\; m=\left| Q\right| ,\; 1=\left| Q^{\prime }\right| \right\} . \end{aligned}$$

We suppose f takes n arguments, m of which are trees; \(m \leqslant n\) by definition. Similarly, the return type may be the product \(\beta _{1} \times \cdots \times \beta _{k}\). In this case, we demand that at most one \(\beta _i\) is a tree type.

Instead of \(\alpha _{1} \times \cdots \times \alpha _{n}\left| Q \rightarrow \beta _{1} \times \cdots \times \beta _{k}\right| Q^{\prime } \in \mathcal {F}(f)\), we sometimes succinctly write \({f}{:}{{\alpha } {\mid } {Q} \rightarrow {\beta } {\mid } {Q'}}\) where \(\alpha \), \(\beta \) denote the product types \(\alpha _{1} \times \cdots \times \alpha _{n}\), \(\beta _{1} \times \cdots \times \beta _{k}\), respectively. It is tacitly understood that the above syntactic restrictions on the length of the annotations Q, \(Q'\) are fulfilled. For every function f, we also consider its cost-free variant from which all ticks have been removed. We collect the cost-free signatures of all functions in the set \(\mathcal {F}^{\text {cf}}\).

Example 2

Consider the function depicted in Fig. 2. Its signature is formally represented as \({\mathsf {T}} {\mid } {Q} \rightarrow {\mathsf {T}} {\mid } {Q'}\), where \(Q \mathrel {:=}[q_*] \cup [(q_{(a,b)})_{a,b \in {\mathbb {Z}}}]\) and \(Q' \mathrel {:=}[q'_*] \cup [(q'_{(a,b)})_{a,b \in {\mathbb {Z}}}]\). We leave it to the reader to specify the coefficients in Q, \(Q'\) so that the rule \((\mathsf {{app}})\) as depicted in Sect. 2 can indeed be employed to type the recursive call of .

Let \(Q = [q_*] \cup [(q_{(a,b)})_{a,b \in {\mathbb {N}}}]\) be an annotation and let K be a rational such that \(q_{(0,2)} + K \geqslant 0\). Then, \(Q' \mathrel {:=}Q + K\) is defined as follows: \(Q' = [q_*] \cup [(q'_{(a,b)})_{a,b \in {\mathbb {N}}}]\), where \(q'_{(0,2)} \mathrel {:=}q_{(0,2)} + K\) and \(q'_{(a,b)} \mathrel {:=}q_{(a,b)}\) for all \((a,b) \ne (0,2)\). Recall that \(q_{(0,2)}\) is the coefficient of the function \(p_{(0,2)}(t) = \log _2(0 \cdot |{t} | + 2) = 1\), so the annotation \(Q+K\) increases or decreases the potential induced by Q by \(|{K} |\), depending on the sign of K. Further, we define the multiplication \(K \cdot Q\) of an annotation Q by a constant K pointwise. Moreover, let \(P = [p_*] \cup [(p_{(a,b)})_{a,b \in {\mathbb {N}}}]\) be another annotation. Then the addition \(P+Q\) of the annotations P and Q is likewise defined pointwise.
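These operations on annotations of length 1 can be sketched as follows; the sparse-dictionary representation (`star` for \(q_*\), `log_coeffs` mapping \((a,b)\) to \(q_{(a,b)}\)) is our choice, not the paper's.

```python
def add_constant(star, log_coeffs, K):
    """Q + K: shifts only the coefficient of p_{(0,2)}(t) = log2(2) = 1."""
    out = dict(log_coeffs)
    out[(0, 2)] = out.get((0, 2), 0) + K
    assert out[(0, 2)] >= 0          # required: q_{(0,2)} + K >= 0
    return star, out

def scale(star, log_coeffs, K):
    """K * Q, defined pointwise."""
    return K * star, {k: K * v for k, v in log_coeffs.items()}

def add(star1, lc1, star2, lc2):
    """P + Q, defined pointwise."""
    keys = set(lc1) | set(lc2)
    return star1 + star2, {k: lc1.get(k, 0) + lc2.get(k, 0) for k in keys}
```

For instance, adding the constant 3 to an annotation leaves every coefficient untouched except \(q_{(0,2)}\), which absorbs the shift, exactly because \(p_{(0,2)}\) is the constant function 1.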

Fig. 12.
figure 12

Conditional expression that models tossing a coin.

5.2 Typing Rules

The non-probabilistic part of the type system is given in [19]. In contrast to the type system employed in [14, 18], the cost model is not fixed but controlled by the ticking operator. Hence, the corresponding application rule \((\mathsf {{app}})\) has been adapted. Costing of evaluation is now handled by a dedicated ticking operator, cf. Fig. 11. In Fig. 12, we give the rule \((\mathsf {{ite:coin}})\) responsible for typing probabilistic conditionals.

Fig. 13.
figure 13

Function illustrates the difference between \((\mathsf {{tick:now}})\) and \((\mathsf {{tick:defer}})\).

We remark that the core type system, that is, the type system given by Fig. 12 together with the remaining rules [19], ignoring annotations, enjoys subject reduction and progress in the following sense, which is straightforward to verify.

Lemma 1

Let e be such that \({e}{:}{\alpha }\) holds. Then: (i) If \(e {\mathop {\mathrel {\mapsto }}\limits ^{c}} \{ e^{p_i}_i \}_{i \in I}\), then \({e_i}{:}{\alpha }\) holds for all \(i \in I\). (ii) The expression e is in normal form wrt. \({\mathop {\mathrel {\mapsto }}\limits ^{c}}\) iff e is a value.

5.3 Soundness Theorems

A program \(\mathsf {P}\) is called well-typed if for any definition \(f\left( x_{1}, \ldots , x_{n}\right) =e\) of \(\mathsf {P}\) and any annotated signature \(f: \alpha _{1} \times \cdots \times \alpha _{n}|Q \rightarrow \beta | Q^{\prime }\), we have a corresponding typing \({x_1}{:}{\alpha _1},\dots ,{x_n}{:}{\alpha _n}{\mid }Q\,\vdash \,{e}{:}{\beta }{\mid }Q'\). A program \(\mathsf {P}\) is called cost-free well-typed if the cost-free typing relation is used (which employs the cost-free signatures of all functions).

Theorem 2

(Soundness Theorem for \((\mathsf {{tick:now}})\)). Let \(\mathsf {P}\) be well-typed. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\) and \(e\sigma {\mathop {\longrightarrow }\limits ^{c}}_\infty \mu \). Then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant c + \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\). Further, if \({\varGamma } {\mid } {Q} \vdash ^{\text {cf}} {e}{:}{\alpha } {\mid } {Q'}\), then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\).

Corollary 1

Let \(\mathsf {P}\) be a well-typed program such that ticking accounts for all evaluation steps. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\). Then e is positive almost surely terminating (and thus in particular almost surely terminating).

Theorem 3

(Soundness Theorem for \((\mathsf {{tick:defer}})\)). Let \(\mathsf {P}\) be well-typed. Suppose \(\varGamma {\mid }Q\,\vdash \,{e}{:}{\alpha }{\mid }Q'\) and . Then, we have \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant c + \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\). Further, if \({\varGamma } {\mid } {Q} \vdash ^{\text {cf}} {e}{:}{\alpha } {\mid } {Q'}\), then \(\varPhi ({\sigma };{\varGamma } {\mid } {Q}) \geqslant \mathbb {E}_{\mu }(\lambda v. \varPhi ({v} {\mid } {Q'}))\).

We comment on the trade-offs between Theorems 2 and 3. As stated in Corollary 1, the benefit of Theorem 2 is that when every recursive call is accounted for by a tick, a type derivation implies the termination of the program under analysis. The same does not hold for Theorem 3. However, Theorem 3 allows more programs to be typed than Theorem 2, because the \((\mathsf {{tick:defer}})\) rule is more permissive than \((\mathsf {{tick:now}})\). This proves very useful in case termination is not required (or can be established by other means).

We exemplify this difference on the function shown in Fig. 13. Theorem 3 supports the derivation of the type , while Theorem 2 does not. This is due to the fact that potential can be “borrowed” with Theorem 3. To wit, from the potential \(\mathsf {rk}(t) + \log _2(|{t} |) + 1\) for one can derive the potential \(\mathsf {rk}(l') + \mathsf {rk}(r')\) for the intermediate context after both let-expressions (note that there is no +1 in this context, because the +1 has been used to pay for the ticks around the recursive calls). Afterwards one can restore the +1 by weakening \(\mathsf {rk}(l') + \mathsf {rk}(r')\) to (using in addition that \(\mathsf {rk}(t) \geqslant 1\) for all trees t). On the other hand, we cannot “borrow” with Theorem 2, because the rule \((\mathsf {{tick:now}})\) forces us to pay the +1 for the recursive call immediately (but there is not enough potential to pay for this). In the same way, the application of rule \((\mathsf {{tick:defer}})\) and Theorem 3 is essential to establish the logarithmic amortised cost of randomised splay trees. (We note that termination of the functions above is easy to establish by other means: it suffices to observe that recursive calls are on sub-trees of the input tree.)

Table 2. Coefficients q such that \(q \cdot \log _2(|t|)\) is a bound on the expected amortised complexity of , depending on the probability p of a rotation and the cost c of a recursive call, where the cost of a rotation is \(1 - c\). Coefficients are additionally presented in decimal representation to ease comparison.

6 Implementation and Evaluation

Implementation. Our prototype \(\mathsf {ATLAS}\) is an extension of the tool described in [18]. In particular, we rely on the preprocessing steps and the implementation of the weakening rule as reported in [18] (which makes use of Farkas’ Lemma in conjunction with selected mathematical facts about the logarithm as mentioned above). We only use the fully-automated mode reported in [18]. We have adapted the generation of the constraint system to the rules presented in this paper. We rely on Z3 [26] for solving the generated constraints. We use the optimisation heuristics of [18] for steering the solver towards solutions that minimize the resulting expected amortised complexity of the function under analysis.

Evaluation. We present results for the benchmarks described in Sect. 2 (plus a randomised version of splay heaps; the source code can be found in [19]) in Table 1. Table 3 details the computation time for type checking our results. Note that type inference takes considerably longer (tens of hours). To the best of our knowledge, this is the first time that an expected amortised cost could be inferred for these data structures.

By comparing the costs of the operations of randomised splay trees and heaps to the costs of their deterministic versions (see Table 1), one can see that the randomised variants have equal or lower complexity in all cases (as noted in Table 2, we have set the costs of the recursive call and the rotation to , such that in the deterministic case, which corresponds to a coin toss with \(p=1\), these costs will always add up to one). Clearly, setting the cost of the recursion to the same value as the cost of the rotation need not reflect the relation of the actual costs. A more accurate estimation of the relation of these two costs will likely require careful experimentation with data structure implementations, which we consider orthogonal to our work. Instead, we report that our analysis is readily adapted to different costs and different coin-toss probabilities. We present an evaluation for different values of p, recursion cost c and rotation cost \(1-c\) in Table 2. In preparing Table 2, the template \(q^{*} \cdot \mathsf {rk}(t) + q_{(1, 0)} \cdot \log _2(|t|) + q_{(0, 2)}\) was used for performance reasons. The memory usage according to Z3’s “max memory” statistic was 7129 MiB per instance. The total runtime was 1H45M, with an average of 11M39S and a median of 2M33S. Two instances took longer (36M and 49M).

Table 3. Number of assertions, solving time for type checking, and maximum memory usage (in mebibytes) for the combined analysis of functions per-module. The number of functions and lines of code is given for comparison.

Deterministic Benchmarks. For comparison we have also evaluated our tool \(\mathsf {ATLAS}\) on the benchmarks of [18]. All results could be reproduced by our implementation. In fact, for the function it yields an improvement of , i.e. compared to . We note that we are able to report better results because we have generalised the resource functions \(p_{\left( a_{1}, \ldots , a_{m}, b\right) }\left( t_{1}, \ldots , t_{m}\right) :=\log _{2}\left( a_{1} \cdot \left| t_{1}\right| +\cdots +a_{m} \cdot \left| t_{m}\right| +b\right) \) to also allow negative values for b (under the condition that \(\sum _i a_i + b \geqslant 1\)) and our generalised \((\mathsf {{let:tree}})\) rule can take advantage of these generalised resource functions (see [19] for a statement of the rule and the proof of its soundness as part of the proof of Theorem 3).

7 Conclusion

In this paper, we present the first fully-automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only (semi-)manually been analysed in the literature.

In future work, we envision extending our analysis to related probabilistic settings such as skip lists [30], randomised binary search trees [20] and randomised treaps [8]. We note that adapting the framework developed in this paper to new benchmarks will likely require identifying new potential functions and extending the type-and-effect system with typing rules for these potential functions. Further, on more theoretical grounds, we want to clarify the connection of the expected amortised cost analysis proposed here with Kaminski’s \(\mathsf {ert}\)-calculus, cf. [15], and study whether the expected cost transformer is conceivable as a potential function.