1 Introduction

The binary search tree is an important data structure in computer science allowing for efficient execution of database operations such as insertion, deletion and retrieval of data. Given a list of elements \(x_1, x_2, \ldots , x_n\) from a totally ordered set, it is the unique labelled rooted binary tree with n nodes, constructed by successive insertion of all elements, that satisfies the following property: for each node with label (or key) y, all keys stored in its left (right) subtree are at most equal to (strictly larger than) y. For an illustration, see Fig. 1.

Fig. 1: Binary search tree constructed from the list 4, 2, 6, 5, 7, 3, 1
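To make the construction concrete, the following minimal Python sketch (ours, for illustration only; the class layout is not part of the original text) builds the tree of Fig. 1 by successive insertion:

```python
class Node:
    # A binary search tree node; keys at most equal to the key go left.
    def __init__(self, key):
        self.key = key
        self.left = None    # subtree with keys <= self.key
        self.right = None   # subtree with keys > self.key

def insert(root, key):
    # Insert a key by descending from the root; returns the (new) root.
    if root is None:
        return Node(key)
    if key <= root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for key in [4, 2, 6, 5, 7, 3, 1]:   # the list of Fig. 1
    root = insert(root, key)
```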

Properties of binary search trees are typically analysed under the random permutation model, where the data \(x_1, \ldots , x_n\) are generated by a uniformly chosen permutation of the first n integers. Among the quantities studied in binary search trees, one finds depths of and distances between nodes, which relate to the performance of search queries and finger searches in the database, as well as the (total) path length, which measures the cost of constructing the tree, and the Wiener index. Further, more complex parameters such as the height, corresponding to worst-case search times, the saturation level and the profile have been studied thoroughly. We review the literature relevant to our work below.

In this note, we complement the extensive literature on random binary search trees with an analysis of weighted versions of the depths of nodes, the path length and the Wiener index, as introduced by Aguech et al. [1]. Here, the weighted depth of a node is the sum of all keys stored on the path to the root. In [1], results about weighted depths of extremal paths were obtained. Kuba and Panholzer [19, 20] studied the problem in random increasing trees, covering the random recursive tree and the random plane-oriented recursive tree. Weighted depths of nodes and the weighted height were also studied by Broutin and Devroye [3] in a more general tree model, which relies on assigning weights to the edges of the tree. Further, the weighted path length in this model was investigated by Rüschendorf and Schopp [29]. Note that we deviate from the terminology of [1, 19], using the term weighted depth for what is called weighted path length there, since we also study a weighted version of the (total) path length of binary search trees.

2 Preliminaries

We introduce some notation. By the size of a finite binary tree, we refer to its number of nodes. Upon embedding a finite rooted binary tree in the complete infinite binary tree, a node is called external if its graph distance to the binary tree is one. Any node on level \(k \ge 1\) in a rooted binary tree is associated with a vector \(v_1 v_2 \ldots v_k \in \{0,1\}^k\) where \(v_i = 0\) if and only if the path from the root to the node continues in the left subtree upon reaching level \(i-1\).
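This addressing is easy to realize in code; the helper below (our hypothetical node_at, reusing Node and insert from the sketch in the Introduction) descends along the vector:

```python
def node_at(root, v):
    # Follow the address vector v = (v_1, ..., v_k) from the root:
    # v_i = 0 descends into the left subtree, v_i = 1 into the right.
    node = root
    for bit in v:
        node = node.left if bit == 0 else node.right
        if node is None:
            return None   # the addressed node is not present in the tree
    return node

# In the tree of Fig. 1, the vector 10 addresses the node with key 5.
assert node_at(root, (1, 0)).key == 5
```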

Let \(n \ge 1\) and \(1 \le k \le n\). Under the random permutation model (short: permutation model), let \(D_k(n)\) be the depth of the node labelled k. By \(W_k(n)\) we denote the sum of all keys on the path from the root to the node labelled k including the labels of both endpoints. For \(x = x_1 x_2 \ldots \in \{0,1\}^\infty \), let \(B_n(x)\) be the maximal depth among nodes of the form \(x_1 \ldots x_k, k \ge 0\). We use \(X_n\) (\({\mathbb {X}}_n\)) to denote the (weighted) depth of the nth inserted node. Finally, we define the height of the tree by \(H_n = \max \{D_k(n): 1 \le k \le n\}\).

Throughout the paper, we denote by \(\mathscr {L}(X)\) the distribution of a random variable X. For real-valued X with finite second moment, we write \(\sigma _X\) for its standard deviation. By \(\mathscr {N}\) we denote a random variable with the standard normal distribution, and by \(\mu \) the Dickman distribution on \([0, \infty )\) characterized by its Fourier transform,

$$\begin{aligned} \int e^{i \lambda x} d \mu (x) = \exp \left( \int _0^1 \frac{e^{i \lambda x}-1}{x} dx\right) , \quad \lambda \in \mathbb {R}. \end{aligned}$$
(1)

The origins of the Dickman distribution go back to Dickman’s [10] classical result on large prime divisors. Compare Hildebrand and Tenenbaum [15] for a survey on the problem. In the probabilistic analysis of algorithms, \(\mu \) first arose in Hwang and Tsai’s [17] study of the complexity of Hoare’s selection algorithm. We refer to this work for more details on the distribution, its historical background and further references.
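For simulation purposes, \(\mu \) can be sampled approximately through the standard perpetuity \(\mathscr {L}(\mathscr {Y}) = \mathscr {L}(U(1+\mathscr {Y}))\), where U is uniform on [0, 1] and independent of \(\mathscr {Y}\); this representation is classical but not stated above, so we record it here as an assumption. A rough Python sketch:

```python
import random

def dickman_sample(iterations=200):
    # Approximate sample via the perpetuity Y <- U * (1 + Y); the random
    # map contracts geometrically, so a few hundred steps are ample.
    y = 0.0
    for _ in range(iterations):
        y = random.random() * (1.0 + y)
    return y

samples = [dickman_sample() for _ in range(10_000)]
print(sum(samples) / len(samples))   # the Dickman law has mean 1
```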

Finally, we use the Landau notations little–o, big–O, little–\(\omega \), big–\(\varOmega \) and big–\(\varTheta \) as \(n \rightarrow \infty \).

2.1 Depths and Height

We recall the following fundamental property of random binary search trees going back to Devroye [6]: in probability and with respect to all moments, we have

$$\begin{aligned} \frac{H_n}{\log n} \rightarrow c^*, \end{aligned}$$
(2)

where \(c^* = 4.31 \ldots \) is the larger of the two solutions to the transcendental equation \(e = (\frac{2e}{x} )^x\).
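Numerically, taking logarithms turns the equation into \(x \log (2e/x) = 1\); the left-hand side is maximal at \(x = 2\), so a simple bisection on the decreasing branch recovers the larger root. The following Python sketch is ours, for illustration only:

```python
import math

def g(x):
    # e = (2e/x)^x  is equivalent to  x * log(2e/x) = 1,  i.e.  g(x) = 0.
    return x * (1.0 + math.log(2.0) - math.log(x)) - 1.0

# g(2) = 1 > 0 and g decreases to -infinity beyond x = 2: bisect on [2, 10].
lo, hi = 2.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)   # 4.31107..., the constant c* from (2)
```

Next, by classical results due to Brown and Shubert [4] and Devroye [7], for any \(x \in \{0,1\}^\infty \), in distribution,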

$$\begin{aligned} \frac{B_n(x) - \log n}{\sqrt{\log n}} \rightarrow \mathscr {N}, \quad \frac{X_n - 2 \log n}{\sqrt{ 2\log n}} \rightarrow \mathscr {N}. \end{aligned}$$
(3)

(In [7, Theorem O1], the first convergence in the last display is formulated for \(x = {\mathbf {0}} := 00\ldots \) The general case follows, since, by symmetry, \({{\mathscr {L}}}( B_n(x)) = {{\mathscr {L}}}( B_n ({\mathbf {0}})) \) for all x. The second convergence was also claimed in a footnote by Mahmoud and Pittel [23].) Grübel [13] studied the process \(\{B_n(x): x \in \{0, 1 \}^\infty \} \), the so-called silhouette, thereby obtaining a functional limit theorem for its integrated version. The asymptotic behaviour of depths of nodes with given labels has been analysed by Devroye and Neininger [9]: uniformly in \(1\le k \le n\) and as \(n \rightarrow \infty \),

$$\begin{aligned} \mathbf {E} \left[ D_k(n) \right] = \log (k(n-k)) + O(1), \quad \text {Var}(D_k(n)) = \log (k(n-k)) + O(1). \end{aligned}$$
(4)

Moreover, for any \(1 \le k \le n\), possibly depending on n, in distribution,

$$\begin{aligned} \frac{D_k(n) - \mathbf {E} \left[ D_k(n) \right] }{\sigma _{D_k(n)}} \rightarrow \mathscr {N}. \end{aligned}$$
(5)

Here, one should also compare Grübel and Stefanoski [14] for stronger results in the context of the corresponding Poisson approximation. For a survey on depths and distances in binary search trees, we refer to Mahmoud’s book [21]. Finally, the asymptotic behaviour of the weighted depths of the nodes associated with the vectors \({\mathbf {0}}\) and \({\mathbf {1}} := 11\ldots \) denoted by \({\mathscr {L}}_n\) and \({\mathscr {R}}_n\) (\({{\mathscr {L}}}\) and \({\mathscr {R}}\) stand for left and right) were studied in [1]. In distribution,

$$\begin{aligned} \frac{{\mathscr {L}}_n}{n} \rightarrow \mathscr {Y}, \quad \frac{{\mathscr {R}}_n - n B_n(\mathbf {1})}{n \sqrt{\log n}} \rightarrow 0, \end{aligned}$$
(6)

where \(\mathscr {Y}\) has the Dickman distribution. The first convergence is closely related to the limit law in Theorem 3.1 in [17].
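The first limit in (6) is easy to observe by simulation: the leftmost path of a binary search tree built by successive insertion consists exactly of the running minima of the input sequence, so \({\mathscr {L}}_n\) is the sum of the lower records of a random permutation. This identification is a standard observation which we rely on here; the sketch itself is ours:

```python
import random

def weighted_left_spine(n):
    # L_n in (6): sum of the labels on the leftmost path of the BST
    # built from a uniform random permutation of 1, ..., n.  The path
    # consists of the running minima of the insertion order.
    perm = list(range(1, n + 1))
    random.shuffle(perm)
    total, current_min = 0, n + 1
    for key in perm:
        if key < current_min:    # a new lower record: a new spine node
            current_min = key
            total += key
    return total

n = 10_000
samples = [weighted_left_spine(n) / n for _ in range(500)]
print(sum(samples) / len(samples))   # close to 1, the mean of the Dickman law
```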

2.2 Path Length and Wiener Index

In a rooted tree, the path length is defined as the sum of the depths of all nodes. Moreover, the Wiener index is obtained by summing the distances over all unordered pairs of vertices. For a random binary search tree of size n, we denote its path length by \(P_n\) and its Wiener index by \(W_n\). Denoting by \(\gamma \) the Euler–Mascheroni constant, we have

$$\begin{aligned} \mathbf {E} \left[ P_n \right] = 2n \log n + (2\gamma -4 )n + o(n), \quad \text {Var}(P_n) = \frac{21 - 2 \pi ^2}{3} n^2 + o(n^2), \end{aligned}$$
(7)

going back to Hoare [16] and Knuth [18]. Further, by [25],

$$\begin{aligned} \mathbf {E} \left[ W_n \right] = 2n^2 \log n + (2\gamma -6)n^2 + o(n^2), \quad \text {Var}(W_n) = \frac{20 - 2 \pi ^2}{3} n^4 + o(n^4). \end{aligned}$$
(8)
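Both quantities can be evaluated directly from the definitions; the sketch below (ours, not taken from [25]) computes \(P_n\) as the sum of node depths and \(W_n\) by summing pairwise distances through lowest common ancestors, which is adequate for small trees:

```python
def bst_depths_and_parents(keys):
    # Build a BST by successive insertion; return {key: depth} and
    # {key: parent} (the root has parent None).
    depth, parent, children, root = {}, {}, {}, None
    for k in keys:
        if root is None:
            root = k
            depth[k], parent[k], children[k] = 0, None, [None, None]
            continue
        node, d = root, 0
        while True:
            d += 1
            side = 0 if k <= node else 1
            if children[node][side] is None:
                children[node][side] = k
                depth[k], parent[k], children[k] = d, node, [None, None]
                break
            node = children[node][side]
    return depth, parent

def path_length_and_wiener(keys):
    # distance(u, v) = depth(u) + depth(v) - 2 * depth(lca(u, v)).
    depth, parent = bst_depths_and_parents(keys)
    P = sum(depth.values())
    nodes, W = list(depth), 0
    for i, u in enumerate(nodes):
        anc_u, w = set(), u
        while w is not None:       # collect all ancestors of u (incl. u)
            anc_u.add(w)
            w = parent[w]
        for v in nodes[i + 1:]:
            w = v
            while w not in anc_u:  # climb from v to the lowest common ancestor
                w = parent[w]
            W += depth[u] + depth[v] - 2 * depth[w]
    return P, W

print(path_length_and_wiener([4, 2, 6, 5, 7, 3, 1]))   # (10, 48) for Fig. 1
```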

Central limit theorems for the path length go back to Régnier [27] and Rösler [28], for the Wiener index to Neininger [25]. More precisely, by [25, Theorem 1.1], there exists a non-trivial random variable \(Z^*\) on \(\mathbb {R}^2\) characterized by a stochastic fixed-point equation, such that, in distribution,

$$\begin{aligned} \left( \frac{W_n-{\mathbb {E}}[W_n]}{n^2},\frac{P_n-{\mathbb {E}}[P_n]}{n}\right) \rightarrow Z^*. \end{aligned}$$
(9)

2.3 The i.i.d. Model

We also consider binary search trees of size n where the data are chosen as the first n values of a sequence of independent random variables \(U_1, U_2, \ldots \) each having the uniform distribution on [0, 1]. Since the vector \((\text {rank}(U_1), \ldots , \text {rank}(U_n))\) constitutes a uniformly chosen permutation, both models lead, in distribution, to the same unlabelled tree. We use the same notation as in the permutation model for quantities not involving the labels of nodes, that is, \(X_n, h_n, H_n, P_n, W_n\) and \(B_n(x)\). Further, we define the weighted path length \(\mathscr {P}_n\) as the sum of all weighted depths, and the weighted Wiener index \(\mathscr {W}_n\) as the sum of all pairwise weighted distances. Here, the weighted distance between two nodes equals the sum of all labels on the path connecting them, labels of the endpoints included. (Notice that the weighted distance between a node and itself is equal to its label.) Finally, analogously to \(B_n(x)\), we define \({\mathscr {B}}_n(x)\) as the weighted depth of the node of largest depth on the path x. We call \(\{{\mathscr {B}}_n(x): x \in \{0,1 \}^\infty \}\) the weighted silhouette of the tree (at time n).
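A short sketch (ours) builds the tree in the i.i.d. model and accumulates the weighted path length along the insertion paths; the expansion of \(\mathbf {E} \left[ \mathscr {P}_n \right] \) in Theorem 4 below serves as a rough check:

```python
import math
import random

def weighted_path_length(n):
    # Sum, over all n nodes, of the labels on the path from the root to
    # the node (endpoints included), for a BST built from i.i.d. uniforms.
    nodes, root, total = {}, None, 0.0   # label -> [left, right, weighted depth]
    for _ in range(n):
        u = random.random()
        if root is None:
            root = u
            nodes[u] = [None, None, u]
        else:
            node, wd = root, u
            while True:
                wd += node                    # add the ancestor's label
                side = 0 if u <= node else 1
                if nodes[node][side] is None:
                    nodes[node][side] = u
                    nodes[u] = [None, None, wd]
                    break
                node = nodes[node][side]
        total += nodes[u][2]
    return total

n = 100_000
# E[weighted path length] = n log n + (gamma - 3/2) n + o(n), and
# gamma - 3/2 = -0.9228...; the printed value fluctuates on a scale of one.
print(weighted_path_length(n) / n - math.log(n))
```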

3 Main Results

Our main results are divided into two groups: Theorems 1 and 2 hold in the permutation model, while Theorems 3 and 4 are formulated in the i.i.d. model.

3.1 Results in the Permutation Model

We start with the expansions of the first two moments of the weighted depth \(W_k(n)\). Uniformly in \(1 \le k \le n\), as \(n \rightarrow \infty \),

$$\begin{aligned}&\mathbf {E} \left[ W_k(n) \right] = k \log (k(n-k+1)) + n + O(k + \log n), \end{aligned}$$
(10)
$$\begin{aligned}&\text {Var}(W_k(n)) = k^2 \log (k(n-k+1)) + \frac{n^2}{2} + O(kn). \end{aligned}$$
(11)

It turns out that the asymptotic distributional behaviour of \(W_k(n)\) with respect to terms of second order is entirely described by that of \(k D_k(n)\) if and only if \(k = \omega (n / \sqrt{\log n})\). Accordingly, in the remainder of this paper, we call nodes with labels of order \(\omega (n / \sqrt{\log n})\) large and nodes with labels of order \(O(n / \sqrt{\log n})\) small.

Theorem 1

(Weighted depths of large nodes) For \(k = \omega (n / \sqrt{\log n})\),

$$\begin{aligned} \mathbf {E} \left[ | W_k(n) - k D_k(n)| \right] = o(\sigma _{k D_k(n)}). \end{aligned}$$
(12)

In particular, for \(0< \alpha < 1\) and \(|k/n - \alpha | = o((\log n)^{-1/2})\), in distribution,

$$\begin{aligned} \left( \frac{D_k(n) - 2 \log n}{\sqrt{2 \log n}}, \frac{W_{k}(n) - 2\alpha n \log n}{\alpha n \sqrt{2 \log n}} \right) \rightarrow (\mathscr {N},\mathscr {N}). \end{aligned}$$
(13)

For the last inserted node, in distribution,

$$\begin{aligned} \left( \frac{X_n - 2 \log n}{\sqrt{2 \log n}}, \frac{{\mathbb {X}}_n}{2n \log n} \right) \rightarrow \left( \mathscr {N}, \xi \right) , \end{aligned}$$
(14)

where \(\mathscr {N}\) and \(\xi \) are independent and \(\xi \) is uniformly distributed on [0, 1].

The asymptotic behaviour of weighted depths of small nodes is to be compared with the corresponding results in [19]. Here, another phase transition occurs when \(k = o(n / \sqrt{\log n})\).

Theorem 2

(Weighted depths of small nodes) Let \(k = O(n/ \sqrt{\log n})\). Then, in distribution,

$$\begin{aligned} \left( \frac{D_k(n) - \mathbf {E} \left[ D_k(n) \right] }{\sigma _{D_k(n)}}, \frac{W_k(n) - k D_k(n)}{n} \right) \rightarrow (\mathscr {N}, \mathscr {Y}), \end{aligned}$$
(15)

where \(\mathscr {N}\) and \(\mathscr {Y}\) are independent and \(\mathscr {Y}\) has the Dickman distribution. Thus, if \(k \sqrt{\log n}/n \rightarrow \beta \ge 0\), in distribution,

$$\begin{aligned} \left( \frac{D_k(n) - 2 \log n }{\sqrt{2 \log n}}, \frac{W_{k}(n) - \mathbf {E} \left[ W_k(n) \right] }{n} \right) \rightarrow (\mathscr {N}, \mathscr {Y}+ \sqrt{2} \beta \mathscr {N}-1). \end{aligned}$$

In particular, if \(|k \sqrt{\log n}/n - \beta | = o((\log n)^{-1/2})\) with \(\beta > 0\), then, in distribution,

$$\begin{aligned} \left( \frac{D_k(n) - 2\log n}{\sqrt{2 \log n}}, \frac{W_{k}(n) - 2 \beta n \sqrt{\log n}}{n} \right) \rightarrow (\mathscr {N}, \mathscr {Y}+ \sqrt{2}\beta \mathscr {N}). \end{aligned}$$

3.2 Results in the i.i.d. Model

Any \(x \in \{0,1\}^\infty \) corresponds to a unique value \(x \in [0,1]\) by \(x = \sum _{i=1}^\infty x_i 2^{-i}\). This identification becomes one-to-one upon allowing only those \(x \in \{0,1\}^\infty \) which contain infinitely many zeros, together with \(x = \mathbf {1}\). In the i.i.d. model, for any \(x \in \{0,1\}^{\infty }\) and \(k \ge 1\), the node \(x_1 \ldots x_k\) eventually appears in the sequence of binary search trees, and we write \(\varXi _k(x)\) for its ultimate label. The following theorem about the behaviour of \({\mathscr {B}}_n(x)\) involves a random continuous distribution function arising as the almost sure limit of \(\varXi _k(x), x \in [0,1],\) as \(k \rightarrow \infty \). We believe that this process is of independent interest and state some of its properties in Proposition 1 in Sect. 3.3. The simulations of \(\varXi _{15}\) presented in Fig. 2 illustrate the scaling limit.

Theorem 3

(Weighted silhouette) There exists a random continuous and strictly increasing bijection \(\varXi (x), x \in [0,1]\), such that, almost surely, uniformly on the unit interval, \(\varXi _k(x) \rightarrow \varXi (x)\). For any \(x \in [0,1]\), in probability,

$$\begin{aligned} \frac{{\mathscr {B}}_n(x)}{\log n} \rightarrow \varXi (x). \end{aligned}$$
(16)

Also, for any \(m \ge 1\), in probability

$$\begin{aligned} \int _0^1 \left| \frac{{\mathscr {B}}_n(x)}{\log n} - \varXi (x) \right| ^m dx \rightarrow 0. \end{aligned}$$
(17)

Further, in probability,

$$\begin{aligned} \sup _{x \in [0,1]} \frac{{\mathscr {B}}_n(x)}{\log n} \rightarrow c^* = 4.31\ldots \end{aligned}$$
(18)

with \(c^*\) as in (2). Finally, for any \(x \in [0,1]\), in distribution,

$$\begin{aligned} \left( \frac{B_n(x) - \log n}{\sqrt{\log n}}, \frac{{\mathscr {B}}_n(x)}{\log n} \right) \rightarrow (\mathscr {N}, \varXi (x)), \end{aligned}$$
(19)

where \(\mathscr {N}\) and \(\varXi (x)\) are independent.

Fig. 2: Two simulations of \(\varXi _{15}\), the dotted line being the graph of the identity function
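Figure 2 can be reproduced by a short recursion mirroring the fixed-point equation (22) of Proposition 1: the root label is uniform, and the two subtrees carry independent copies of the process rescaled to [0, U] and [U, 1], respectively. The sketch is ours, written under this construction:

```python
import random

def xi(k):
    # The 2**k values of the step function Xi_k on the dyadic intervals
    # [j * 2**-k, (j + 1) * 2**-k), generated in the spirit of (22).
    u = random.random()                           # label of the current root
    if k == 0:
        return [u]
    left = [u * y for y in xi(k - 1)]             # labels rescaled to [0, u]
    right = [u + (1 - u) * y for y in xi(k - 1)]  # labels rescaled to [u, 1]
    return left + right

values = xi(15)   # plotted as a step function on [0, 1], this gives Fig. 2
```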

The next theorem extends the distributional convergence result in Theorem 1.1 in [25], that is (9), by central limit theorems for the weighted path length and the weighted Wiener index.

Theorem 4

(Weighted path length and Wiener index) In the i.i.d. model, we have

$$\begin{aligned} \mathbf {E} \left[ \mathscr {P}_n \right] = n \log n + (\gamma - 3/2) n + o(n), \quad \mathbf {E} \left[ \mathscr {W}_n \right] = n^2 \log n + (\gamma -11/4)n^2 + o(n^2), \end{aligned}$$

and

$$\begin{aligned} Var (\mathscr {P}_n) = \frac{65 - 6 \pi ^2}{36} n^2 + o(n^2), \quad Var (\mathscr {W}_n) = \frac{2413 - 240 \pi ^2}{1440} n^4 + o(n^4). \end{aligned}$$

The leading constants in the expansions of the covariances between \(P_n, W_n, \mathscr {P}_n\) and \(\mathscr {W}_n\) are given in (36)–(38). (The leading constant for \(Cov (P_n, W_n)\) was already given in [25].) As \(n \rightarrow \infty \), with convergence in distribution and with respect to the first two moments in \(\mathbb {R}^4\), we have

$$\begin{aligned} \left( \frac{\mathscr {W}_n-{\mathbb {E}}[\mathscr {W}_n]}{n^2},\frac{W_n-{\mathbb {E}}[W_n]}{n^2},\frac{\mathscr {P}_n-{\mathbb {E}}[\mathscr {P}_n]}{n},\frac{P_n-{\mathbb {E}}[P_n]}{n}\right) \rightarrow Z, \end{aligned}$$

where the limiting distribution \(\mathscr {L}(Z)\) is the unique fixed point of the map T in (35).
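As a numerical consistency check for the variance of the weighted path length, one may reuse the weighted_path_length sketch from Sect. 2.3 (our experiment, not part of the original proof):

```python
import math
import statistics

n, runs = 2_000, 400
samples = [weighted_path_length(n) / n for _ in range(runs)]
print(statistics.variance(samples))   # sample estimate of the rescaled variance
print((65 - 6 * math.pi ** 2) / 36)   # the limit constant, ~0.1608
```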

Conclusions We have seen that there exist three types of nodes showing significantly different behaviour with respect to their weighted depths. By Theorem 1, for \(k = \omega (n/\sqrt{\log n})\), second-order fluctuations of weighted depths are due to variations of the depths of nodes. In the second regime, when \(k = \varTheta (n/\sqrt{\log n})\), variations of weighted depths are determined by two independent contributions, one from the depths and one from the keys on the paths. Finally, when \(k = o(n/\sqrt{\log n})\), only fluctuations of labels on paths influence second-order terms of weighted depths. The third regime can be further subdivided with respect to the first-order terms of \(W_k(n)\) and \(k D_k(n)\): for \(k = \omega (n / \log n)\), they coincide; for \(k = \varTheta (n/\log n)\), they are of the same magnitude; whereas, for \(k = o(n/ \log n)\), they are of different scale. By Theorem 3, the weighted silhouette behaves considerably differently. Here, the lack of concentration around the mean leads to an interesting random distribution function on the unit interval as scaling limit.

3.3 Further Results and Remarks

Model comparison We decided to present Theorems 3 and 4 in the i.i.d. model rather than in the permutation model since this allows for a stronger mode of convergence in (16), (17) and a clearer presentation of the proof of Theorem 4. In the i.i.d. model, denoting by \({\mathscr {W}}_{(k)}(n)\) the weighted depth of the node of rank k among the first n inserted keys, Theorems 1 and 2 remain valid upon replacing \(W_k(n)\) by \(n {\mathscr {W}}_{(k)}(n)\). Similarly, Theorems 3 and 4 hold in the permutation model where weighted depths and the weighted path length are to be scaled down by a factor n and the weighted Wiener index by a factor \(n^2\). The convergences in (16) and (17) then only hold in distribution. This can be deduced most easily from the following coupling of the two models: starting with the binary search tree in the i.i.d. model, also consider the random binary search tree in the permutation model relying on the permutation \((\text {rank}(U_1), \ldots , \text {rank}(U_n))\). Then, for all \(1 \le k \le n\),

$$\begin{aligned} \left| {\mathscr {W}}_{(k)}(n) - \frac{W_k(n)}{n} \right| \le H_n \max _{1 \le i \le n} \left| U_i - \frac{\text {rank}(U_i)}{n} \right| . \end{aligned}$$
(20)

It is well known that the second factor on the right-hand side grows like \(n^{-1/2}\); compare, e.g. Donsker’s theorem for empirical distribution functions or the Dvoretzky–Kiefer–Wolfowitz inequality [11]. Combining this, (20) and (2) is sufficient to transfer all results in Sect. 3 between the two models.
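The \(n^{-1/2}\) growth of the second factor is also easy to observe empirically; up to the discrete grid, it is the Kolmogorov–Smirnov statistic of the uniform sample. The sketch is ours:

```python
import math
import random

def max_rank_deviation(n):
    # max_i |U_i - rank(U_i)/n|, the second factor on the right of (20).
    u = [random.random() for _ in range(n)]
    rank = {v: r for r, v in enumerate(sorted(u), start=1)}
    return max(abs(v - rank[v] / n) for v in u)

for n in (10**3, 10**4, 10**5):
    print(n, max_rank_deviation(n) * math.sqrt(n))   # stays bounded in n
```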

The depth first search process In the permutation model, let \(v_1, \ldots , v_{n+1}\) be the external nodes as discovered by the depth first search process from left to right. By \(D^*_k(n)\) and \(W^*_k(n), 1 \le k \le n+1\), we denote depth and weighted depth of the external node \(v_k\). Then, at the end of Sect. 4.1, we show that, uniformly in \(1 \le k \le n\),

$$\begin{aligned} \mathbf {E} \left[ |D_k(n) - D^*_k(n)|^2 \right] = o(\log n), \quad \mathbf {E} \left[ |W_k(n) - W^*_k(n)|^2 \right] = o(\text {Var}(W_k(n))). \end{aligned}$$
(21)

Thus, the results in Theorems 1 and 2 also cover the second-order analysis of the sequences \(D_k^*(n)\) and \(W_k^*(n)\).

Weighted distances In the permutation model, let \(D_{k,\ell }(n)\) be the graph distance between the nodes labelled \(1 \le k \le \ell \le n\) and \(W_{k,\ell }(n)\) be the sum of all labels on the path from k to \(\ell \), labels at the endpoints included. Asymptotic normality for the sequence \((D_{k,\ell }(n))\) (after rescaling) under the optimal condition \(\ell - k \rightarrow \infty \) has been obtained in [9]. For uniformly chosen nodes, distributional convergence results date back to Mahmoud and Neininger [22] and Panholzer and Prodinger [26]. Analogously to Theorem 1, it is straightforward to prove central limit theorems jointly for weighted and non-weighted distances. We only state the results. If \(\ell - k = \varOmega (n)\) and \(k = \omega (n / \sqrt{\log n})\), then

$$\begin{aligned} \mathbf {E} \left[ |W_{k,\ell }(n) - kD_k(n) - \ell D_\ell (n)| \right] = o(\sigma _{\ell D_\ell (n)}). \end{aligned}$$

In particular, for \(0< s< t < 1\) and \(|k/n - s| = o((\log n)^{-1/2}), |\ell /n - t| = o((\log n)^{-1/2})\), we have, in distribution,

$$\begin{aligned}&\Bigg (\frac{D_k(n) - 2 \log n}{\sqrt{2 \log n}}, \frac{D_\ell (n) - 2 \log n}{\sqrt{2 \log n}}, \frac{D_{k,\ell }(n) - 4 \log n}{\sqrt{4 \log n}}, \frac{W_{k,\ell }(n) - 2 (s+t) n \log n}{n \sqrt{2 \log n}}\Bigg ) \nonumber \\&\quad \rightarrow \left( \mathscr {N}_1, \mathscr {N}_2, \frac{\mathscr {N}_1 + \mathscr {N}_2}{\sqrt{2}}, s \mathscr {N}_1 + t \mathscr {N}_2\right) . \end{aligned}$$

Here, \(\mathscr {N}_1, \mathscr {N}_2\) are independent random variables both with the standard normal distribution.

The limit process \(\varXi \) The process \(\varXi \) in Theorem 3 is a random distribution function. In particular, it can be regarded as an element in the set of càdlàg functions \({\mathscr {D}}[0,1]\) consisting of all \(f: [0,1] \rightarrow \mathbb {R}\), such that, for all \(t \in [0,1]\), \(f(t) = \lim _{s \downarrow t} f(s)\) and \(\lim _{s \uparrow t} f(s)\) exists. The absolute value of f is defined by \(\sup \{|f(t)| : t \in [0,1]\}\). Endowed with Skorokhod’s topology \(J_1\), \({\mathscr {D}}[0,1]\) becomes a Polish space. We refer to Chapter 3 in Billingsley’s book [2] for detailed information on this matter.

Proposition 1

(Properties of \(\varXi \)) The process \(\varXi \) is unique (in distribution) among all càdlàg processes with finite absolute second moment satisfying

$$\begin{aligned} \mathscr {L}((\varXi (t))_{t \in [0,1]})&= \mathscr {L}\left( \left( \mathbf {1}_{ [0,1/2) }(t) U \varXi (2t)\right. \right. \nonumber \\&\quad \left. \left. + \, \mathbf {1}_{ [1/2,1) }(t) \left( (1-U) \varXi '(2t-1) + U \right) \right) _{t \in [0,1]}\right) . \end{aligned}$$
(22)

Here, \(\varXi , \varXi ', U\) are independent, U has the uniform distribution on [0, 1], and \(\varXi '\) is distributed like \(\varXi \). We have

(i) \(\mathbf {E} \left[ \varXi (t) \right] = t\) for all \(t \in (0,1)\);

(ii) \(\mathscr {L}( (\varXi (t))_{t \in [0,1]} ) = \mathscr {L}( (1 - \varXi (1-t))_{t \in [0,1]})\);

(iii) \(\varXi (\xi )\) has the arcsine distribution with density

$$\begin{aligned} \frac{1}{\pi \sqrt{x(1-x)}}, \quad x \in (0,1), \end{aligned}$$

where \(\varXi , \xi \) are independent and \(\xi \) has the uniform distribution on [0, 1] (a simulation sketch follows after this list);

(iv) for \(t \in (0,1)\), \(\mathscr {L}(\varXi (t))\) has a smooth density \(f_t: (0,1) \rightarrow (0, \infty )\);

(v) for \(t \in (0,1/2)\), \(x f'_t(x) = - f_{2t}(x)\), \(x \in (0,1)\), \(f_t\) is strictly monotonically decreasing and \(\lim _{x \uparrow 1} f_t(x) = 0\);

(vi) with \(\alpha ^{(i)}_t := \lim _{x \downarrow 0} f^{(i)}_t(x)\), \(i = 0,1\), \(t \in (0,1/2)\) and \(\gamma _0 = 1/4, \gamma _1 = 5/16\), we have \(\alpha ^{(i)}_t = (-1)^i \infty \) for \(0 < t \le \gamma _i\), \(|\alpha ^{(i)}_t| < \infty \) for \(\gamma _i< t < 1/2\) and \(|\alpha ^{(i)}_t| \uparrow \infty \) as \(t \downarrow \gamma _i\).

Random recursive trees A random recursive tree is constructed as follows: starting with the root labelled one, in the kth step, \(k \ge 2\), a node labelled k is inserted in the tree and connected to an already existing node chosen uniformly at random. Weighted depths in random binary search trees differ substantially from those in random recursive trees analysed in [19], where all nodes show an asymptotic behaviour comparable to that of nodes labelled \(k = o(n/\sqrt{\log n})\) in the binary search tree. The difference is highlighted by the weighted path length: while the weighted path length of a binary search tree is of the same order as its path length, it follows from results in [19] that the weighted path length \({\mathscr {Q}}_n\) of a random recursive tree of size n is of order \(n^2\). The same is valid for its standard deviation. We conjecture that the sequence \((n^{-2} {\mathscr {Q}}_n)\) converges in distribution to a non-trivial limit; however, the recursive approach worked out in the proof of Theorem 4, which also applies to the analysis of the path length in random recursive trees, seems not to be fruitful in this context.

Outline All results are proved in Sect. 4 starting with the proofs of Theorems 1 and 2 as well as (21) in Sect. 4.1. Here, most arguments are based on representations of (weighted) depths as sums of bounded independent random variables which go back to Devroye and Neininger [9]. Theorem 3 and Proposition 1 are proved in Sect. 4.2. In this part, the construction of the limiting process relies on suitable uniform \(L_1\)-bounds on the increments of the process \(\varXi _k(x)_{x \in [0,1]}, k \ge 1,\) while the properties of the limit laws formulated in Proposition 1 follow from the distributional fixed-point equation (22). Finally, the proof of Theorem 4 relying on the contraction method is worked out in Sect. 4.3.

4 Proofs

4.1 Weighted Depths of Labelled Nodes

In the permutation model, let \(A_{j,k}\) be the event that the node labelled k is in the subtree of the node labelled j. Then, \(D_k(n) = \sum _{j=1}^n \mathbf {1}_{ A_{j,k} } -1 \) and \(W_k(n) = \sum _{j=1}^n j \mathbf {1}_{ A_{j,k} }\). It is easy to see that \(A_{1, k}, \ldots , A_{k-1,k}\) and \(A_{k+1,k}, \ldots , A_{n,k}\) are two families of independent events; however, there exist subtle dependencies between the sets. Following the approach in [9], let \(B_{j,k} = A_{j,k-1}\) for \(j < k\) and \(B_{j,k} = A_{j,k+1}\) for \(j > k\). For convenience, let \(B_{k,k}\) be an almost sure event. The following lemma summarizes results in [9], and we refer to this paper for a proof. In this context, note that Devroye [8] gives distributional representations as sums of independent (or m-dependent) indicator variables for quantities growing linearly in n, such as the number of leaves.

Lemma 1

Let \(1 \le k \le n\). Then, the events \(B_{j,k}, j = 1, \ldots , n\), are independent. For \(j \ne k\), we have

$$\begin{aligned} \mathbb {P} \left( A_{j,k} \right) = \frac{1}{|k-j| + 1}, \quad \mathbb {P} \left( B_{j,k} \right) = \frac{1}{|k-j|}. \end{aligned}$$
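Both formulas admit short Monte Carlo checks. For \(A_{j,k}\) we use the standard reformulation that, for \(j \ne k\), the node labelled k lies in the subtree of the node labelled j exactly when j is inserted first among the labels \(\min (j,k), \ldots , \max (j,k)\); this fact is assumed here, not proved. The sketch is ours:

```python
import random

def prob_A(j, k, n, runs=100_000):
    # Monte Carlo estimate of P(A_{j,k}) via the insertion-order criterion.
    lo, hi = min(j, k), max(j, k)
    hits = 0
    for _ in range(runs):
        perm = list(range(1, n + 1))
        random.shuffle(perm)
        first = next(v for v in perm if lo <= v <= hi)
        hits += (first == j)
    return hits / runs

print(prob_A(3, 7, n=10), 1 / (abs(7 - 3) + 1))   # both close to 0.2
```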

From the lemma, it follows that

$$\begin{aligned}&\mathbf {E} \left[ \sum _{j = 1}^n \mathbf {1}_{ B_{j,k} \backslash A_{j,k} } \right] \le 2, \quad \text {and} \quad \mathbf {E} \left[ \sum _{j = 1}^n j \mathbf {1}_{ B_{j,k} \backslash A_{j,k} } \right] \le 2k + \log n. \end{aligned}$$

The ideas in [9] can also be used to analyse second (mixed) moments. Straightforward calculations show the following bounds:

$$\begin{aligned}&\mathbf {E} \left[ \sum _{i,j = 1}^n \mathbf {1}_{ B_{j,k} } \mathbf {1}_{ B_{i,k} \backslash A_{i,k} } \right] = O(1), \quad \text {and}\\&\mathbf {E} \left[ \sum _{i,j = 1}^n i j \mathbf {1}_{ B_{j,k} } \mathbf {1}_{ B_{i,k} \backslash A_{i,k} } \right] = O(k^2 + k(\log n)^2). \end{aligned}$$

Here, both O-terms are uniform in \(1 \le k \le n\). Define \(\bar{D}_k(n) = \sum _{j=1}^n \mathbf {1}_{ B_{j,k} }-1\) and \({\bar{W}}_k(n) = \sum _{j=1}^n j \mathbf {1}_{ B_{j,k} }\). We make the following observation:

  • O: The asymptotic statements in (10), (11), Theorem 1 and Theorem 2 are correct if and only if they are correct upon replacing \(D_k(n)\) by \(\bar{D}_k(n)\) and/or \(W_k(n)\) by \({\bar{W}}_k(n)\).

For \(i=1,2, n \ge 0\) and \(1 \le k \le n\), set \(H^{(i)}_{n} := \sum _{j=1}^n j^{-i}\) and \(H^{(i)}_{k,n} := H^{(i)}_{k-1} + H^{(i)}_{n-k}\). Using Lemma 1, one easily computes

$$\begin{aligned} \mathbf {E} \left[ {\bar{W}}_k(n) \right]&= k (H_{k,n}^{(1)}-1) + n + 1 ,\\ \text {Var}({\bar{W}}_k(n))&= k^2 \left( H_{k,n}^{(1)} - H_{k,n}^{(2)}-3\right) + \frac{n^2}{2} + kn + 2k \left( H^{(1)}_{k-1} - H^{(1)}_{n-k}\right) - \frac{n}{2} + k + 1. \end{aligned}$$

As \(H^{(1)}_{n} = \log (n+1) + O(1)\) and \(H^{(2)}_n = O(1)\), both expansions (10) and (11) follow from observation O.

4.1.1 Weighted Depths of Large Nodes

We prove Theorem 1. First, (12) follows from (4) and

$$\begin{aligned} \mathbf {E} \left[ \left| k D_k(n) - W_k(n) \right| \right] \le k + \sum _{j=1}^n |k-j| \mathbb {P} \left( A_{j,k} \right) \le k+n. \end{aligned}$$
(23)

For \(k = \omega (n/\sqrt{\log n})\), combining (4), (5) and (10), in distribution,

$$\begin{aligned} \left( \frac{D_k(n) - \mathbf {E} \left[ D_k(n) \right] }{\sigma _{ D_k(n)}}, \frac{W_k(n) - \mathbf {E} \left[ W_k(n) \right] }{\sigma _{W_k(n)}} \right) \rightarrow (\mathscr {N}, \mathscr {N}). \end{aligned}$$

From here, statement (13) follows from (4) and (10).

Considering the last inserted node with value \(Y_n\), note that, conditionally on \(Y_n = k\), the correlations between the events \(A_{j,k}, j < k\) and \(A_{j,k}, j > k\) vanish. More precisely, given \(Y_n = k\), the family \(\{ \mathbf {1}_{ A_{j,k} }, j = 1, \ldots , n\}\) is distributed like a family of independent Bernoulli random variables \(\{V_{j,k}: j = 1, \ldots , n\}\) with \(\mathbb {P} \left( V_{j,k}=1 \right) = |k-j|^{-1}\) for \(j \ne k\) and \(\mathbb {P} \left( V_{k,k}=1 \right) = 1\). Thus,

$$\begin{aligned} \mathbf {E} \left[ |Y_n (X_n+1) - {\mathbb {X}}_n| \right]&\le \frac{1}{n} \sum _{k=1}^n \mathbf {E} \left[ \sum _{j=1}^n |k-j| \mathbf {1}_{ A_{j,k} } \Bigg | Y_n = k \right] \\&= \frac{1}{n} \sum _{k=1}^n\mathbf {E} \left[ \sum _{j=1}^n |k-j| V_{j,k} \right] \le n. \end{aligned}$$

By (3), we have \(X_n / \log n \rightarrow 2\) in probability. Hence, in order to prove (14), it suffices to show that, in distribution,

$$\begin{aligned} \left( \frac{X_n - 2 \log n}{\sqrt{2 \log n}}, \frac{Y_n}{n} \right) \rightarrow \left( \mathscr {N}, \xi \right) . \end{aligned}$$
(24)

For a sequence \((k_n)\) satisfying \(sn \le k_n \le tn\) for \(0< s< t < 1\), let us condition on the event \(Y_n = k_n\). Then, by the central limit theorem for triangular arrays of row-wise independent uniformly bounded random variables with diverging variance applied to \(V_{j, k_n}, j =1, \ldots , n\), in distribution,

$$\begin{aligned} \frac{X_n - 2 \log n}{\sqrt{2 \log n}} \rightarrow \mathscr {N}. \end{aligned}$$

Hence, (24) follows from an application of the theorem of dominated convergence noting that \(Y_n\) is uniformly distributed on \(\{1, \ldots , n\}\).

4.1.2 Weighted Depths of Small Nodes

We prove Theorem 2. Let \({\bar{D}}^>_k(n) = \sum _{j = k+1}^n \mathbf {1}_{ B_{j,k} }\) and \({\bar{W}}^>_k(n) = \sum _{j = k+1}^n j \mathbf {1}_{ B_{j,k} }\). Since \(k = O(n/ \sqrt{\log n})\), the same calculation as in (23) shows that,

$$\begin{aligned} \frac{\mathbf {E} \left[ | {\bar{W}}_k(n) - {\bar{W}}^>_k(n) - k ({\bar{D}}_k(n) - {\bar{D}}^>_k(n) )| \right] }{n} \le \frac{k}{n} \rightarrow 0, \quad n \rightarrow \infty . \end{aligned}$$
(25)

For \(\lambda , \mu \in \mathbb {R}\), we have

$$\begin{aligned} \log&\mathbf {E} \left[ \exp \left( i \lambda \left( {\bar{D}}^>_k(n) - \log n\right) / \sqrt{\log n} + i \mu \left( {\bar{W}}^>_k(n) - k {\bar{D}}^>_k(n)\right) /n\right) \right] \\&= - i\lambda \sqrt{\log n} + \log \mathbf {E} \left[ \exp \left( i \sum _{j = k+1}^n \left( \frac{\lambda }{\sqrt{\log n}} + \mu \frac{j -k}{n}\right) \mathbf {1}_{ B_{j,k} } \right) \right] \\&= - i\lambda \sqrt{\log n} + \sum _{j=k+1}^n \log \left( 1 + \frac{ \exp \left( i \left( \frac{\lambda }{\sqrt{\log n}} + \mu \frac{j -k}{n} \right) \right) -1}{j-k} \right) . \end{aligned}$$

By a standard Taylor expansion, the last display equals

$$\begin{aligned}&-i \lambda \sqrt{\log n} + \sum _{j=k+1}^n \frac{ \exp \left( i \left( \frac{\lambda }{\sqrt{\log n}} + \mu \frac{j -k}{n} \right) \right) -1}{j-k} + o(1) \\&\quad = - i\lambda \sqrt{\log n} + \sum _{j=k+1}^n \frac{ \exp \left( i \mu \frac{j -k}{n} \right) \left( 1 + \frac{i \lambda }{\sqrt{\log n}} - \frac{\lambda ^2}{2\log n} \right) -1}{j-k} + o(1) \\&\quad = - \lambda ^2 /2 + \left( 1 + \frac{i \lambda }{\sqrt{\log n}} - \frac{\lambda ^2}{2 \log n} \right) \sum _{j=0}^{n-1} \frac{ \exp \left( i \mu \frac{j+1}{n} \right) -1}{j+1} + o(1) \\&\quad = - \lambda ^2/2 + \int _0^1 \frac{e^{i \mu x}-1}{x} dx + o(1). \end{aligned}$$

Here, in the last step, we have used that the sum on the right-hand side is a Riemann sum over the unit interval whose mesh size \(n^{-1}\) tends to zero. Thus, using the notation of the theorem, (1) and Lévy’s continuity theorem, in distribution,

$$\begin{aligned} \left( \frac{{\bar{D}}^>_k(n) - \log n}{\sqrt{\log n}}, \frac{{\bar{W}}^>_k(n) - k {\bar{D}}^>_k(n)}{n}\right) \rightarrow (\mathscr {N}, \mathscr {Y}). \end{aligned}$$
(26)

In order to deduce (15) note that, by Lemma 1, \({\bar{D}}_k(n) - {\bar{D}}^>_k(n)\) and \(({\bar{D}}^>_k(n), {\bar{W}}^>_k(n))\) are independent while

$$\begin{aligned} \frac{{\bar{D}}_k(n) - {\bar{D}}^>_k(n) - \mathbf {E} \left[ {\bar{D}}_k(n) - {\bar{D}}^>_k(n) \right] }{\sigma _{{\bar{D}}_k(n) - {\bar{D}}^>_k(n)}} \rightarrow \mathscr {N}, \end{aligned}$$

in distribution if and only if \(k \rightarrow \infty \) using the central limit theorem for sums of independent and uniformly bounded random variables. Since

$$\begin{aligned} \frac{{\bar{D}}_k(n) - \mathbf {E} \left[ {\bar{D}}_k(n) \right] }{\sigma _{{\bar{D}}_k(n)}}&= \frac{{\bar{D}}^>_k(n) - \mathbf {E} \left[ {\bar{D}}^>_k(n) \right] }{\sqrt{\log n}} \frac{\sqrt{\log n}}{\sigma _{{\bar{D}}_k(n)}} \\&\quad \; + \frac{{\bar{D}}_k(n) - {\bar{D}}^>_k(n) - \mathbf {E} \left[ {\bar{D}}_k(n) - {\bar{D}}^>_k(n) \right] }{\sigma _{{\bar{D}}_k(n) - {\bar{D}}^>_k(n)}} \frac{\sigma _{{\bar{D}}_k(n) - {\bar{D}}^>_k(n)}}{\sigma _{{\bar{D}}_k(n)}}, \end{aligned}$$

we deduce

$$\begin{aligned} \left( \frac{{\bar{D}}_k(n) - \mathbf {E} \left[ {\bar{D}}_k(n) \right] }{\sigma _{{\bar{D}}_k(n)}}, \frac{{\bar{W}}^>_k(n) - k {\bar{D}}^>_k(n)}{n}\right) \rightarrow (\mathscr {N}, \mathscr {Y}), \end{aligned}$$

from (26) upon treating the cases \(k = O(1)\) and \(k = \omega (1)\) separately. From here, the assertion (15) follows with the help of (25) and observation O.

4.1.3 Proof of (21)

The main observation is that the \(k\hbox {th}\) external node visited by the depth first search process is always contained in the subtree rooted at the node labelled k. This can be proved by induction exploiting the decomposition of the tree at the root. Thus, denoting by \(H_k(n)\) the height of the subtree rooted at the node labelled k, we have

$$\begin{aligned} D_k(n)&\le D_k^*(n) \le D_k(n) + H_k(n), \\ W_k(n)&\le W_k^*(n) \le W_k(n) + M_k(n) H_k(n). \end{aligned}$$

Here, \(M_k(n)\) stands for the largest label in the subtree rooted at the node labelled k. Let \(T_k(n)\) be the size of the subtree rooted at k. Then \(T_k(n) = 1 + T^{<}_k(n) + T^{>}_k(n)\) where \(T^{<}_k(n)\) denotes the number of elements in the subtree rooted at k with values smaller than k. By Lemma 1, for \(\ell \le n - k\), we have \(\mathbb {P} \left( T^{>}_k(n) \ge \ell \right) = \mathbb {P} \left( A_{k,k+\ell } \right) = 1/(\ell +1)\). Using the same arguments for the quantity \(T^{<}_k(n)\), we deduce that, uniformly in \(1 \le k \le n\),

$$\begin{aligned} \mathbf {E} \left[ T_k(n) \right] = \varTheta (\log n), \quad \mathbf {E} \left[ (T_k(n))^2 \right] = \varTheta (n^{1/2}), \quad \mathbf {E} \left[ (\log T_k(n))^2 \right] = O(1). \end{aligned}$$

Thus, by an application of (2), for some \(C_1 > 0\),

$$\begin{aligned} \mathbf {E} \left[ |D_k(n) - D_k^*(n)|^2 \right] \le \mathbf {E} \left[ (H_k(n))^2 \right]&\le C_1 \mathbf {E} \left[ (\log T_k(n))^2 \right] = O(1). \end{aligned}$$

By the same arguments, for some \(C_2 > 0\), we have

$$\begin{aligned} \mathbf {E} \left[ |W_k(n) - W_k^*(n)|^2 \right]&\le \mathbf {E} \left[ (M_k(n) H_k(n))^2 \right] \le \mathbf {E} \left[ (k + T_k(n))^2 (H_k(n))^2 \right] \\&\le C_2 k^2 + C_1\left( 2 k \mathbf {E} \left[ T_k(n) (\log T_k(n))^2 \right] \right. \\&\left. \quad +\, \mathbf {E} \left[ (T_k(n))^2 (\log T_k(n))^2 \right] \right) \\&= O(k^2 + (\log n)^{2} n^{1/2}). \end{aligned}$$

From here, (21) follows from (10).

4.2 The Weighted Silhouette

We prove Theorem 3 and Proposition 1.

Proof of Theorem 3

We start with the uniform convergence of \((\varXi _k)\). For all \(x \in [0,1]\), \(|\varXi _k(x) - \varXi _{k-1}(x)|\) is distributed like the product of \(k+1\) independent random variables, each having the uniform distribution on [0, 1]. In particular, by the union bound and Markov’s inequality, for any \(m \ge 1\),

$$\begin{aligned} \mathbb {P} \left( \sup _{x \in [0,1]} |\varXi _k(x) - \varXi _{k-1}(x)| \ge t \right) \le 2^k \mathbb {P} \left( \prod _{i=1}^{k+1} U_i \ge t \right) \le \left( \frac{2}{m+1}\right) ^{k} t^{-m}. \end{aligned}$$

For \(k \ge 1\), let \({\mathscr {D}}_k = \{\ell 2^{-k}: \ell = 1, \ldots , 2^k - 1\}\). By construction, for \(k \ge 1\), the map \(x \rightarrow \varXi _k(x)\) is a right continuous step function. Further, it is continuous at x if and only if \(x \notin {\mathscr {D}}_k\). Next, for \(0< q < 1\),

$$\begin{aligned} \mathbf {E} \left[ \sup _{x \in [0,1]} |\varXi _k(x) - \varXi _{k-1}(x)| \right]&= \int _0^\infty \mathbb {P} \left( \sup _{x \in [0,1]} |\varXi _k(x) - \varXi _{k-1}(x)| \ge t \right) dt \\&\le q^k + \int _{q^k}^\infty \left( \frac{2}{m+1}\right) ^{k} t^{-m} dt \\&= q^k + \frac{1}{m-1}\left( \frac{2}{m+1}\right) ^{k} q^{-k(m-1)}. \end{aligned}$$

With \(m=2\) and \(q = \sqrt{2/3}\), the latter expression is bounded by \(2 q^k\). By Markov’s inequality, it follows that \(\sup _{m \ge n} \sup _{x \in [0,1]} |\varXi _m(x) - \varXi _{n}(x)| \rightarrow 0\) in probability as \(n \rightarrow \infty \). An application of the triangle inequality shows that \(\sup _{m, p \ge n} \sup _{x \in [0,1]} |\varXi _m(x) - \varXi _{p}(x)| \rightarrow 0\) in probability as \(n \rightarrow \infty \). By monotonicity, this convergence is almost sure. Thus, almost surely, \((\varXi _k)\) is uniformly Cauchy in the space of càdlàg functions endowed with the uniform topology. By completeness, \((\varXi _k)\) converges to a limit denoted by \(\varXi \) with càdlàg paths. Moreover, \(\varXi \) is continuous at \(x \notin {\mathscr {D}}\) where \(\mathscr {D} = \cup _{m \ge 1} {\mathscr {D}}_m\) since this is true for all \(\varXi _k\), \(k \ge 1\). For \(x \in {\mathscr {D}}\), let \(\varPhi (x)\) be the key of the node associated with \(x_1 \ldots x_{k-1}\) where \(k \ge 1\) is chosen minimal with \(x \in {\mathscr {D}}_k\). Then, \(\lim _{y \uparrow x} \varXi (y) = \varPhi (x) = \varXi (x).\) Thus, \(x \mapsto \varXi (x)\) is continuous. By the construction of the tree, it is clear that \(\varXi (x) < \varXi (y)\) for any \(x, y \in {\mathscr {D}}\) with \(x < y\). As \({\mathscr {D}}\) is dense in [0, 1], the process \(\varXi \) is strictly monotonically increasing. Obviously, \(\varXi (0) = 0\) and \(\varXi (1) = 1\); hence, \(\varXi \) is the distribution function of a probability measure on [0, 1].\(\square \)

We turn to the convergence of \({\mathscr {B}}_n(x)\). For any fixed \(x \in [0,1]\), display (3) implies that, as \(n \rightarrow \infty \), in probability, \(B_n(x) / \log n \rightarrow 1\). Thus, (16) follows from the convergence \(\varXi _k(x) \rightarrow \varXi (x)\). The convergence (16) is with respect to all moments since \(B_n(x) \le H_n\) and we have convergence of all moments in (2). By the theorem of dominated convergence, for any \(m \ge 1\), again using (2), we have

$$\begin{aligned} \int _0^1 \mathbf {E} \left[ \left| \frac{{{\mathscr {B}}}_n(x)}{\log n} - \varXi (x) \right| ^m \right] dx \rightarrow 0. \end{aligned}$$

This shows (17). To prove (18), note that, for any \(k \ge 1\), \(\sup _{x \in [0,1]} {{\mathscr {B}}}_n(x)\) is larger than the product of the height of the subtree rooted at the node \(w_k := 1\ldots 1\) on level k and \(\varXi _{k-1}({\mathbf {1}})\). Let \(\varepsilon > 0\). Fix k large enough such that \(\mathbb {P} \left( \varXi _{k-1}({\mathbf {1}})< 1-\varepsilon \right) < \varepsilon \). Conditional on its size, the subtree rooted at \(w_k\) is a random binary search tree. Since its size grows linearly in n as \(n \rightarrow \infty \), it follows from (2) that, for all n sufficiently large, its height exceeds \((c^*-\varepsilon ) \log n\) with probability at least \(1-\varepsilon \). For these values of n, we have \(\sup _{x \in [0,1]} {{\mathscr {B}}}_n(x) \ge (c^* - 6 \varepsilon ) \log n\) with probability at least \(1-2 \varepsilon \). As \(\varepsilon \) was chosen arbitrarily, this shows (18).

For the joint convergence of \(B_n(x)\) and \({\mathscr {B}}_n(x)\) for fixed \(x \in [0,1]\), we abbreviate \(B_n := B_n(x), {\mathscr {B}}_n := {\mathscr {B}}_n(x)\), \(\varXi _k := \varXi _k(x), \varXi = \varXi (x)\) and \(\bar{B}_n = (B_n - \log n)/\sqrt{\log n}\). Note that \(\varXi \) and \(B_n\) are not independent which causes the proof to be more technical. Denote by \(N_k\) the time when the node associated with \(x_1 \ldots x_k\) is inserted in the binary search tree. For any \(\varepsilon > 0\), we can choose \(k, L \ge 1\) such that, for all n sufficiently large,

$$\begin{aligned} \mathbb {P} \left( |\varXi _k - \varXi | \ge \varepsilon \right) + \mathbb {P} \left( N_k \ge L \right) + \mathbb {P} \left( \left| \frac{{\mathscr {B}}_n}{\log n} - \varXi \right| \ge \varepsilon \right) \le \varepsilon . \end{aligned}$$

Further, there exists \(\delta > 0\) such that \(\mathbb {P} \left( |\varXi _k - \varXi _{k-1}| \le \delta \right) \le \varepsilon \). Then, for \(r , y \in \mathbb {R}\) with \(\mathbb {P} \left( \varXi = y \right) = 0\), and n large enough,

$$\begin{aligned} \mathbb {P} \left( \bar{B}_n \le r, \frac{{\mathscr {B}}_n}{ \log n} \le y \right) \le 2 \varepsilon + \mathbb {P} \left( \bar{B}_n \le r, \varXi _k \le y + 2 \varepsilon , |\varXi _k - \varXi _{k-1}| \ge \delta , N_k < L \right) . \end{aligned}$$

Let \(\bar{x} = x_{k+1} x_{k+2} \ldots \), \((V_1, V_2, \ldots )\) be an independent copy of \((U_1, U_2, \ldots )\) and

$$\begin{aligned} \text {Bin}(n,p) := \sum _{i=1}^n \mathbf {1}_{ \{V_i \le p\} }, \quad n \ge 0, p \in [0,1]. \end{aligned}$$

Given \(\varXi _k, |\varXi _k - \varXi _{k-1}|, N_k\), on \(N_k < n\), \(\bar{B}_n\) is distributed like \(\bar{B}^*_{\text {Bin}(n - N_k, |\varXi _k - \varXi _{k-1}|)}(\bar{x}) + k / \sqrt{\log n}\) where \((B^*_n(\bar{x}))\) is distributed like \((B_n(\bar{x}))\) and independent from the remaining quantities. We deduce

$$\begin{aligned}&\mathbb {P} \left( \bar{B}_n \le r, \frac{{\mathscr {B}}_n}{ \log n} \le y \right) \\&\quad \le 2 \varepsilon + \mathbb {P} \left( \frac{k}{\sqrt{ \log n}} + \bar{B}^*_{\text {Bin}(n - L, \delta )}(\bar{x}) \le r, \varXi _k \le y + 2 \varepsilon , |\varXi _k - \varXi _{k-1}| \ge \delta , N_k < L \right) \\&\quad \le 3 \varepsilon + \mathbb {P} \left( \frac{k}{\sqrt{ \log n}} + \bar{B}^*_{\text {Bin}(n - L, \delta )}(\bar{x}) \le r \right) \mathbb {P} \left( \varXi \le y + 2 \varepsilon \right) . \end{aligned}$$

Using the asymptotic normality of \((\bar{B}_n^*(\bar{x}))\) (after rescaling) in (3), taking the limit superior as \(n \rightarrow \infty \) and then letting \(\varepsilon \) tend to zero, we obtain

$$\begin{aligned} \limsup _{n \rightarrow \infty } \mathbb {P} \left( \bar{B}_n \le r, \frac{{\mathscr {B}}_n}{\log n} \le y \right) \le \mathbb {P} \left( \mathscr {N}\le r \right) \mathbb {P} \left( \varXi \le y \right) . \end{aligned}$$

The proof of the converse direction establishing (19) is easier. It runs along the same lines upon using the trivial bounds \(|\varXi _k - \varXi _{k-1}| \le 1\) and \(N_k \ge 0\).

Proof of Proposition 1

We start with the characterization of the distribution of the process. For a deterministic sequence of pairwise distinct numbers \(u_1, u_2, \ldots \) on the unit interval, we define \(\xi _k(x)\) analogously to \(\varXi _k(x)\) in the infinite binary search tree constructed from this sequence. Here, we set \(\xi _k(x) = 0\) if the node \(x_1 \ldots x_k\) is not in the tree. Let \(n_m^-, m \ge 1,\) be the subsequence defined by the elements \(u_{n^-_m} < u_1\) and \(n_m^+, m \ge 1\), be the subsequence defined by the elements \(u_{n^+_m} > u_1\). At least one of these sequences is infinite. For \(m \ge 1\), let \(y_m^{-} = u_{n^{-}_m} / u_1\) and \(y_m^+ = (u_{n^+_m} - u_1) / (1-u_1)\). Next, define \(\xi ^{-}_k\) (\(\xi ^+_k\), respectively) analogously to \(\xi _k\) based on the sequence \((y^-_m)\) (\((y^+_m)\), respectively). By construction, for \(k \ge 1\),

$$\begin{aligned} \xi _k(x) = \mathbf {1}_{ [0,1/2) }(x) u_1 \xi _{k-1}^-(2x) + \mathbf {1}_{ [1/2,1] }(x) ((1-u_1) \xi _{k-1}^+(2x - 1) + u_1). \end{aligned}$$

Applying the construction to the sequence \(U_1, U_2, \ldots \) yields

$$\begin{aligned} \varXi _k(x) = \mathbf {1}_{ [0,1/2) }(x) U_1 \varXi _{k-1}^-(2x) + \mathbf {1}_{ [1/2,1] }(x) ((1-U_1) \varXi _{k-1}^+(2x - 1) + U_1). \end{aligned}$$

Almost surely, the random sequences \(y_m^{-}\) and \(y_m^+\) are both infinite and \((\varXi ^-_k), (\varXi ^+_k)\) are independent copies of \((\varXi _k)\). Further, both sequences are independent of \(U_1\). Hence, letting \(k \rightarrow \infty \) in the last display, we obtain (22) on an almost sure level. The characterization of \(\mathscr {L}(\varXi )\) by (22) follows from a standard contraction argument, and the argument on page 267 in [12] applies to our setting without any modifications.\(\square \)

We move on to the statements (i) – (vi) on the marginal distributions of the process. Here, we use notation that was introduced in the proof of Theorem 3. By continuity, it suffices to show (i) for \(x \in {\mathscr {D}}\). Let \(k \ge 1\). By symmetry, for \(1 \le i \le 2^k-1\), we have \(\mathbf {E} \left[ \varPhi (i 2^{-k}) \right] = i 2^{-k}\). Thus, the assertion follows for \(x \in {\mathscr {D}}\) since \(\varPhi (x) = \varXi (x)\). The symmetry statement (ii) is reminiscent of the fact that the uniform distribution on [0, 1] is symmetric around 1 / 2. More precisely, we apply the reflection argument from [1] which is at the core of the proof of the second assertion in (6). Let \(U_1^* = 1 - U_1, U_2^* = 1 - U_2, \ldots \) and define \(\varXi ^*\) analogously to \(\varXi \) in the binary search tree process relying on the sequence \(U_1^*, U_2^*, \ldots \) Then, \(\varXi ^*(t) +\varXi (1-t) = 1\) for all \(t \in [0,1]\) which proves (ii). With \(Y = \varXi (\xi )\), (22) yields

$$\begin{aligned} \mathscr {L}(Y) = \mathscr {L}(U Y + \mathbf {1}_{ A } (1-U)), \end{aligned}$$

where \(\mathbf {1}_{ A }, U, Y\) are independent and \(\mathbb {P} \left( A \right) = 1/2\). From [5], it follows that Y has the arcsine distribution, proving (iii). We move on to the statements about the distribution of \(\varXi (t)\). Let \(t \in (0,1/2)\). Since \(\varXi \) is strictly increasing, we have \(\varXi (2t) \in (0,1)\) almost surely. By (22), \(\mathscr {L}(\varXi (t)) = \mathscr {L}(U \varXi (2t))\) with conditions as in (22). Therefore, \(\mathscr {L}(\varXi (t))\) admits a density. By symmetry, the same is true for \(t \in (1/2,1)\). For \(t \in (0,1/2)\), by conditioning on the value of U, one finds the density

$$\begin{aligned} f_t(x) = \mathbf {E} \left[ \frac{\mathbf {1}_{ [x,1] }(\varXi (2t))}{\varXi (2t)} \right] , \quad x \in (0,1]. \end{aligned}$$
(27)

The density \(f_t\) is monotonically decreasing and continuous on (0, 1] with \(f_t(1)=0\). For \(t \in (1/2,1)\), \(f_t(x) = f_{1-t}(1-x), x \in (0,1)\) is a density of \(\mathscr {L}(\varXi (t))\) by (ii). By (27), for \(t \in (0,1/2), x \in (0,1)\),

$$\begin{aligned} f_t(x) = \int _x^1 \frac{f_{2t}(y)}{y} dy, \quad \text {or} \quad x f_t'(x) = -f_{2t}(x). \end{aligned}$$
(28)

Upon setting \(f_0 = f_1 = 0\), the last identity also holds for \(t =0\) and \(t = 1/2\) since \(f_{1/2} = \mathbf {1}_{ [0,1] }\) is a density of \(\mathscr {L}(\varXi (1/2))\). Thus, for any \(t \in (0,1)\), \(f_t\) is smooth on (0, 1). Since the uniform distribution takes values arbitrarily close to one, it follows that, for all \(\delta > 0, t \in (0,1)\), we have \(\mathbb {P} \left( \varXi (t)> 1 - \delta \right) > 0\). Hence, for all \(t \in (0,1)\), the density \(f_t\) is strictly positive on (0, 1). Thus, for \(t \in (0, 1/2)\), \(f_t\) is strictly monotonically decreasing. Summarizing, we have shown (iv) and (v). For \(t \in (0,1/4]\), the assertion \(\alpha ^{(0)}_t = \infty \) in (vi) follows immediately from (28) since \(\alpha ^{(0)}_{2t} > 0\). Let \(1/4< t < 1/2\). Assume \(\alpha ^{(0)}_{1-2(1-2t)} < \infty \). Then, \(f_{2(1-2t)}(1) < \infty \). By (28), it follows that \(f'_{1-2t}(1)\) is finite and hence \(f'_{2t}(0)\) is finite. Thus, \(f_{2t}(y) / y\) is bounded in a neighbourhood of zero and \(\alpha ^{(0)}_t < \infty \). For \(t > 3/8\), we have \(1-2(1-2t) > 1/2\); thus, \(\alpha ^{(0)}_t < \infty \). Iterating this argument leads to \(\alpha _t^{(0)} < \infty \) for all \(1/3< t < 1/2\). In order to proceed further, note that, for \(t > 1/4\), there exists \(k \in \mathbb {N}\) such that \(\varXi (t)\) stochastically dominates \(Z := U_1(U_2 + (1-U_2)\prod _{\ell = 1}^kU_{2+\ell })\). The random variable Z admits a density \(f_Z\) given by

$$\begin{aligned} f_Z(x) = 1 + \int _x^1 r(y) dy - x r(x), \quad r(x) = \frac{1}{x^2} \int _0^x \mathbb {P} \left( \prod _{\ell = 1}^kU_{2+\ell } \le \frac{x-v}{1-v} \right) dv. \end{aligned}$$

Thus,

$$\begin{aligned} \lim _{x \downarrow 0} f_Z(x) = 1 + \int _0^1 r(y) dy < \infty . \end{aligned}$$

It follows that \(\alpha _t^{(0)} \le 1 + \int _0^1 r(x) dx < \infty \). Since \(\varXi \) is increasing, the function \(t \mapsto \alpha _t^{(0)}\) is decreasing. Thus, by monotonicity and continuity, it follows \(\alpha ^{(0)}_t \uparrow \infty \) as \(t \downarrow 1/4\). For \(t \le 1/4\), \(\alpha ^{(0)}_t = \infty \) follows immediately from (28) since \(\alpha ^{(0)}_{2t} < \infty \). For \(1/4< t < 1/2\), the remaining statements about \(\alpha _t^{(1)}\) are direct corollaries of the results for \(\alpha _t^{(0)}\) since \(\alpha ^{(1)}_t = \alpha ^{(0)}_{1 -2(1-2t)}\). This finishes the proof of (vi).

The curvature We make a concluding remark about the curvature of \(f_t, t \in (0,1/2)\). First, since \(x f^{''}_t(x) = - f_{2t}'(x) - f_t'(x)\), for \(0 < t \le 1/4\), the function \(f_t\) is convex. From (28) it is easy to deduce \(f_{1/3}(x) = 2(1-x)\). Since \(f_{1/3}'' = f_{1/2}'' = 0\), it is plausible to conjecture that \(f_t\) is convex for \(t \le 1/3\) and concave for \(1/3 \le t < 1/2\). Concavity at rational points with small denominator such as \(t = 3/8\) or \(t = 5/12\) can be verified by hand using (28).

4.3 Weighted Path Length and Wiener Index

In order to obtain mean and variance for the weighted path length and the weighted Wiener index, we use the reflection argument from the proof of Proposition 1 (ii). To this end, let \(\mathscr {P}_n^*\) and \(\mathscr {W}_n^*\) denote weighted path length and weighted Wiener index in the binary search tree built from the sequence \(U_1^* = 1-U_1, U_2^* = 1-U_2, \ldots \) Then, \(\mathscr {P}_n + \mathscr {P}_n^* = P_n + n\) and \(\mathscr {W}_n + \mathscr {W}_n^* = W_n + \binom{n}{2}\), providing the claimed expansions for \(\mathbf {E} \left[ \mathscr {P}_n \right] \) and \(\mathbf {E} \left[ \mathscr {W}_n \right] \) upon recalling (7) and (8).

For a finite rooted labelled binary tree T, denote by p(T) its path length, by \(\mathbf {p}(T)\) its weighted path length, by w(T) its Wiener index and by \(\mathbf {w}(T)\) its weighted Wiener index. Let \(T_1, T_2\) be its left and right subtree and x the label of the root. Then, denoting by |T| the size of T, for \(|T| \ge 1\),

$$\begin{aligned} p(T)&= p(T_1) + p(T_2) + |T|-1, \end{aligned}$$
(29)
$$\begin{aligned} w(T)&= w(T_1) + w(T_2) + (|T_2| + 1) p(T_1) + (|T_1| + 1) p(T_2) + |T| + 2 |T_1| |T_2| -1. \end{aligned}$$
(30)

The first statement is obvious; the argument for the second can be found in [25]. For the weighted quantities, one obtains

$$\begin{aligned} \mathbf {p}(T)&= \mathbf {p}(T_1) + \mathbf {p}(T_2) + |T| x, \end{aligned}$$
(31)
$$\begin{aligned} \mathbf {w}(T)&= \mathbf {w}(T_1) + \mathbf {w}(T_2) + (|T_2| + 1) \mathbf {p}(T_1) + (|T_1| + 1) \mathbf {p}(T_2) + (|T| + |T_1| |T_2|)x. \end{aligned}$$
(32)

Again, the first assertion is easy to see and we only justify the second. The terms \(\mathbf {w}(T_1)\) and \(\mathbf {w}(T_2)\) account for weighted distances within the subtrees. The sum of all weighted distances between nodes in the left subtree and the root equals \(\mathbf {p}(T_1) + |T_1|x\). Replacing \(T_1\) by \(T_2\), we obtain the analogous sum in the right subtree. The sum of all distances between nodes in different subtrees equals \(|T_1| \mathbf {p}(T_2) + |T_2| \mathbf {p}(T_1) + |T_1| |T_2| x\). Finally, we need to add x for the weighted distance of the root to itself. Adding up the terms and simplifying leads to (32). For \(\alpha , \beta > 0\) let \(\alpha T + \beta \) be the tree obtained from T where each label y is replaced by \(\alpha y + \beta \). Obviously, \(p(T) = p(\alpha T +\beta )\) with the analogous identity for the Wiener index. For the weighted quantities, we have

$$\begin{aligned} \mathbf {p}(\alpha T + \beta )&= \alpha \mathbf {p}(T) + (p(T) + |T|) \beta , \end{aligned}$$
(33)
$$\begin{aligned} \mathbf {w}(\alpha T + \beta )&= \alpha \mathbf {w}(T) + (w(T) + |T|(|T| + 1)/2) \beta . \end{aligned}$$
(34)

Let T be the binary search tree of size n in the i.i.d. model. Then, given \(I_n := \text {rank}(U_1), U := U_1\), in distribution, the trees \(\frac{1}{U} T_1\) and \(\frac{1}{1-U} T_2 - \frac{U}{1-U}\) are independent binary search trees of size \(I_n - 1\) and \(n- I_n\), constructed from independent sequences of uniformly distributed random variables on [0, 1]. Thus, combining (29)–(34), for the vector \(Y_n = (\mathscr {W}_n, W_n,\mathscr {P}_n,P_n)^T\), we have

$$\begin{aligned} Y_n&\mathop {=}\limits ^{d} \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} U &{}\quad 0&{}\quad (n+1-I_n)U&{}\quad 0 \\ 0 &{} 1&{} 0&{} n+1-I_n\\ 0 &{} 0&{} U&{} 0\\ 0 &{} 0&{} 0&{} 1 \end{array}\right] Y_{I_n-1}\nonumber \\&\quad \, \,+ \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1-U &{}\quad U&{}\quad I_n(1-U)&{}\quad I_nU \\ 0 &{} 1&{} 0&{} I_n\\ 0 &{} 0&{} 1-U&{} U\\ 0 &{} 0&{} 0&{} 1 \end{array}\right] Y'_{n-I_n}\\&\qquad +\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} (2n + (n-I_n)(3 I_n + n - 2))U/2\\ n-1+2(I_n-1)(n-I_n)\\ (2n - I_n)U\\ n-1 \end{array}\right) , \end{aligned}$$

where \((Y_n'), (Y_n), (I_n, U)\) are independent and \((Y'_n)\) is distributed like \((Y_n)\). Here, \(\mathop {=}\limits ^{d}\) indicates that left- and right-hand side are identically distributed.

We consider the sequence \((Z_n)_{n\ge 0}\) defined by

$$\begin{aligned} Z_n:=\left( \frac{\mathscr {W}_n-{\mathbb {E}}[\mathscr {W}_n]}{n^2},\frac{W_n-{\mathbb {E}}[W_n]}{n^2},\frac{\mathscr {P}_n-{\mathbb {E}}[\mathscr {P}_n]}{n},\frac{P_n-{\mathbb {E}}[P_n]}{n}\right) ^{T}, \quad n \ge 1, \end{aligned}$$

and \(Z_0 = 0\). Let \(\alpha _n = \mathbf {E} \left[ \mathscr {W}_n \right] , \beta _n = \mathbf {E} \left[ W_n \right] , \gamma _n = \mathbf {E} \left[ \mathscr {P}_n \right] \) and \(\delta _n = \mathbf {E} \left[ P_n \right] \). Further, let

$$\begin{aligned} A_1^{(n)}&=\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \left( \frac{I_n-1}{n}\right) ^2 U &{}\quad 0&{}\quad \left( 1-\frac{I_n-1}{n}\right) \frac{I_n-1}{n} U&{}\quad 0 \\ 0 &{} \left( \frac{I_n-1}{n}\right) ^2&{} 0&{} \left( 1-\frac{I_n-1}{n}\right) \frac{I_n-1}{n}\\ 0 &{} 0&{} \frac{I_n-1}{n} U &{} 0\\ 0 &{} 0&{} 0&{}\frac{I_n-1}{n} \end{array}\right] ,\\ A_2^{(n)}&=\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \left( 1-\frac{I_n}{n}\right) ^2(1-U) &{}\quad \left( 1-\frac{I_n}{n}\right) ^2 U &{}\quad \frac{I_n}{n}\left( 1-\frac{I_n}{n}\right) (1-U)&{}\quad \frac{I_n}{n}\left( 1-\frac{I_n}{n}\right) U \\ 0 &{} \left( 1-\frac{I_n}{n}\right) ^2&{} 0&{} \frac{I_n}{n}\left( 1-\frac{I_n}{n}\right) \\ 0 &{} 0&{} \left( 1-\frac{I_n}{n}\right) (1-U)&{} \left( 1-\frac{I_n}{n}\right) U \\ 0 &{} 0&{} 0&{}1-\frac{I_n}{n} \end{array}\right] , \end{aligned}$$

and \(C^{(n)}=(C_1^{(n)}, C_2^{(n)}, C_3^{(n)}, C_4^{(n)})^T\) with

$$\begin{aligned} C_1^{(n)}&= \frac{U}{n^2}\alpha _{I_n-1}+\frac{1-U}{n^2}\alpha _{n-I_n}+\frac{U}{n^2}\beta _{n-I_n}+U\frac{(n+1-I_n)}{n^2}\gamma _{I_n-1}+(1-U)\frac{I_n}{n^2}\gamma _{n-I_n}\\&\quad \, \, +U\frac{I_n}{n^2}\delta _{n-I_n} + U \frac{2n + (n-I_n)(3 I_n + n - 2)}{2n^2}-\frac{1}{n^2}\alpha _n,\\ C_2^{(n)}&=\frac{1}{n^2}\beta _{I_n-1}+\frac{1}{n^2}\beta _{n-I_n}+\left( 1-\frac{I_n-1}{n}\right) \frac{1}{n}\delta _{I_n-1}+\frac{I_n}{n^2}\delta _{n-I_n}\\&\quad \, \, +\frac{n-1+2(I_n-1)(n-I_n)}{n^2}-\frac{1}{n^2}\beta _n,\\ C_3^{(n)}&=\frac{U}{n}\gamma _{I_n-1}+\frac{1-U}{n}\gamma _{n-I_n}+\frac{U}{n}\delta _{n-I_n}+\left( 2 - \frac{I_n}{n} \right) U-\frac{1}{n}\gamma _n,\\ C_4^{(n)}&=\frac{1}{n}\delta _{I_n-1}+\frac{1}{n}\delta _{n-I_n}+ 1 - \frac{1}{n}-\frac{1}{n}\delta _n. \end{aligned}$$

Then, from the recurrence for \((Y_n)\), it follows

$$\begin{aligned} Z_n\mathop {=}\limits ^{d}A_1^{(n)}Z_{I_n-1}+A_2^{(n)}Z'_{n-I_n}+C^{(n)}, \quad n \ge 1, \end{aligned}$$

where \((Z_n), (Z'_n), (I_n, U)\) are independent and \((Z'_n)\) is distributed like \((Z_n)\). We prove convergence of \(Z_n\) in distribution by an application of the contraction method. To this end, note that \(I_n/n \rightarrow U\) almost surely by the strong law of large numbers. Thus, with convergence in \(L_2\) and almost surely,

$$\begin{aligned} A_1^{(n)}&\rightarrow A_1:=\left[ \begin{array}{cccc} U^3 &{}\quad 0&{}\quad U^2(1-U)&{}\quad 0 \\ 0 &{} U^2&{} 0&{} U(1-U)\\ 0 &{} 0&{} U^2&{} 0\\ 0 &{} 0&{} 0&{}U \end{array}\right] ,\\ A_2^{(n)}&\rightarrow A_2:=\left[ \begin{array}{cccc} (1-U)^3 &{}\quad U(1-U)^2&{}\quad U(1-U)^2&{}\quad U^2(1-U)\\ 0 &{} (1-U)^2&{} 0&{} U(1-U)\\ 0 &{} 0&{} (1-U)^2&{} U(1-U)\\ 0 &{} 0&{} 0&{} 1-U \end{array}\right] ,\end{aligned}$$

and

$$\begin{aligned} C^{(n)} \rightarrow C :=\left( \begin{array}{cccc}U^2\log {U}+(1-U^2)\log {(1-U)}+U(-14U^2 + 9U + 5)/4 \\ 2U\log {U}+2(1-U)\log (1-U)+6U(1-U)\\ U^2\log {U}+(1-U^2)\log (1-U)+U \\ 2U\log {U}+2(1-U)\log {(1-U)} + 1\end{array}\right) . \end{aligned}$$

For a quadratic matrix A, denote by \(\Vert A\Vert _{\text {op}}\) its spectral radius. By calculating the eigenvalues of \(A_1 A_1^T\) and \(A_2 A_2^T\), one checks that \(\Vert A_1\Vert _{\text {op}} = U\) and \(\Vert A_2\Vert _{\text {op}} = 1-U\). Thus,

$$\begin{aligned} \mathbf {E} \left[ \Vert A_1 A_1^T\Vert _{\text {op}} \right] +\mathbf {E} \left[ \Vert A_2 A_2^T\Vert _{\text {op}} \right] \le \mathbf {E} \left[ \Vert A_1\Vert ^2_{\text {op}} \right] +\mathbf {E} \left[ \Vert A_2\Vert ^2_{\text {op}} \right] <1. \end{aligned}$$

Moreover, we have \(\mathbb {P} \left( I_n \in \{1, \ldots , \ell \} \cup \{n\} \right) \rightarrow 0\) for all fixed \(\ell \). Thus, by Theorem 4.1 in [24], in distribution and with convergence of the first two moments, we have \(Z_n \rightarrow (\mathscr {W},W,\mathscr {P},P)\) where \({\mathscr {L}}(\mathscr {W},W,\mathscr {P},P)\) is the unique fixed-point of the map:

$$\begin{aligned} T : {\mathscr {M}}_2^4(0) \longrightarrow {\mathscr {M}}_2^4(0), \quad T(\mu ) = {\mathscr {L}} \left( A_1 Z +A_2 Z' + C \right) , \end{aligned}$$
(35)

with \(A_1, A_2, C\) defined above, where \(Z, Z', U\) are independent and \({\mathscr {L}}(Z)={\mathscr {L}}(Z')=\mu \). Here, \({\mathscr {M}}_2^4(0)\) denotes the set of probability measures on \(\mathbb {R}^4\) with finite absolute second moment and zero mean. Variances and covariances can be computed successively using the fixed-point equation, e.g. in the following order: \(\mathbf {E} \left[ P^2 \right] , \mathbf {E} \left[ P W \right] \), \(\mathbf {E} \left[ W^2 \right] , \mathbf {E} \left[ P \mathscr {P} \right] ,\) \(\mathbf {E} \left[ \mathscr {P}^2 \right] , \mathbf {E} \left[ \mathscr {P}W \right] ,\) \(\mathbf {E} \left[ P \mathscr {W} \right] , \mathbf {E} \left[ W \mathscr {W} \right] \), \(\mathbf {E} \left[ \mathscr {P}\mathscr {W} \right] , \mathbf {E} \left[ \mathscr {W}^2 \right] \). In addition to the variances given in the theorem, one obtains

$$\begin{aligned} \text {Cov}(P_n, \mathscr {P}_n)&\sim \frac{21 - 2 \pi ^2}{ 6} n^2, \quad \text {Cov}(P_n, W_n) \sim \frac{20 -2 \pi ^2}{3} n^3, \end{aligned}$$
(36)
$$\begin{aligned} \text {Cov}(\mathscr {P}_n, W_n)&\sim \frac{10 - \pi ^2}{3} n^3, \quad \text {Cov}(P_n, \mathscr {W}_n) \sim \frac{10 - \pi ^2}{3} n^3, \end{aligned}$$
(37)
$$\begin{aligned} \text {Cov}(W_n, \mathscr {W}_n)&\sim \frac{10 - \pi ^2}{3} n^4, \quad \text {Cov}(\mathscr {P}_n, \mathscr {W}_n) \sim \frac{481 -48 \pi ^2}{288} n^3. \end{aligned}$$
(38)