1 Motivation

The evolution of a population involves various forces. Kingman [14] considered the equilibrium of a population as arising from a balance between two factors, with other phenomena causing only perturbations. The pair of factors he chose was mutation and selection. The most famous model for the evolution of a one-locus haploid population of infinite size with discrete generations, proposed by Kingman [14], is as follows:

Let the fitness value of any individual take values in [0, 1]. Higher fitness values represent higher productivities. Let \((P_n)=(P_n)_{n\ge 0}\) be a sequence of probability measures on [0, 1], with \(P_n\) denoting the fitness distribution of the population at generation n. Let \(b\in [0,1)\) be a mutation probability, and let Q be a probability measure on [0, 1] serving as the mutant fitness distribution. Then \((P_n)\) is constructed by the following iteration:

$$\begin{aligned} P_{n}(dx)=(1-b)\frac{x P_{n-1}(dx)}{\int y P_{n-1}(dy)}+bQ(dx), \quad n\ge 1. \end{aligned}$$
(1)

Biologically, this says that a proportion b of the population mutates, with fitness values sampled from Q, while the rest undergoes selection via a size-biased transformation. Kingman used the term “House of Cards” for the fact that the fitness value of a mutant is independent of that before mutation, as the mutation destroys the biochemical “house of cards” built up by evolution.
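The iteration (1) is straightforward to simulate. The following minimal sketch runs it on a discretised fitness grid; the grid, the uniform choice of Q, the value \(b=0.2\) and the initial distribution \(P_0=Q\) are illustrative assumptions only, not prescribed by the model.

```python
import numpy as np

# Minimal sketch of iteration (1) on a discretised fitness grid.
# The grid, the uniform mutant law Q, the value of b and the choice
# P_0 = Q are illustrative assumptions, not prescribed by the model.
x = np.linspace(0.0, 1.0, 1001)       # fitness values in [0, 1]
Q = np.full_like(x, 1.0 / x.size)     # Q: uniform mutant fitness law
P = Q.copy()                          # P_0
b = 0.2                               # mutation probability

for n in range(5000):
    mean_fitness = np.dot(x, P)       # \int y P_{n-1}(dy)
    P = (1 - b) * x * P / mean_fitness + b * Q   # iteration (1)

print("mean fitness after iteration:", np.dot(x, P))
```

With these choices the mean fitness settles close to the value \(\theta _b\) characterised in Theorem 1 below.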

House-of-Cards models, which include Kingman’s model, belong to a larger class of models on the balance between mutation and selection. Variations and generalisations of Kingman’s model have been proposed and studied for different biological purposes; see for instance Bürger [4,5,6,7], Steinsaltz et al. [17], Evans et al. [12] and Yuan [18]. We refer to [19] for a more detailed literature review.

But to the best of my knowledge, no random generalisation has been developed except in my previous paper [19], in which the mutation probabilities are assumed to form an i.i.d. sequence. The randomness of the mutation probabilities reflects the influence of a stable random environment on the mutation mechanism. The fitness distributions were shown to converge weakly to a globally stable equilibrium distribution for any initial fitness distribution. When selection is favoured over mutation, a condensation may occur, which means that almost surely a positive proportion of the population travels to and condenses on the largest fitness value. We obtained a condensation criterion which relies on the equilibrium, whose explicit expression is however unknown. So we do not know what the equilibrium looks like, nor whether condensation occurs, in concrete cases.

As a continuation of [19], this paper aims to solve the above problems based on the discovery of a matrix representation of the random model, which yields an explicit expression for the equilibrium. The matrix representation also allows us to examine the effects of different designs of randomness by comparing the moments and condensate sizes of the equilibria in several models.

2 Models

This section mainly summarises Sect. 2 of [19], and additionally introduces a new random model in which all mutation probabilities are equal but random.

2.1 Two Deterministic Models

Let \(M_1\) be the space of probability measures on [0, 1] endowed with the topology of weak convergence. Let \((b_n)=(b_n)_{n\ge 1}\) be a sequence of numbers in [0, 1), and \(P_0, Q\in M_1\). Kingman’s model with time-varying mutation probabilities or simply the general model has parameters \((b_n), Q, P_0\). In this model, \((P_n)=(P_n)_{n\ge 0}\) is a (forward) sequence of probability measures in \(M_1\) generated by

$$\begin{aligned} P_{n}(dx)=(1-b_{n})\frac{x P_{n-1}(dx)}{\int y P_{n-1}(dy)}+b_{n}Q(dx), \quad n\ge 1, \end{aligned}$$
(2)

where \(\int \) denotes \(\int _0^1 .\) We introduce a function \(S:M_1\mapsto [0,1]\) such that

$$\begin{aligned} S_u:=\sup \{x:u([x,1])>0\},\quad \forall u\in M_1. \end{aligned}$$

Then \(S_u\) is interpreted as the largest fitness value of a population of distribution u. Let \(h:=S_{P_0}\) and assume that \(h\ge S_Q.\) This assumption is natural because in any case we have \(S_{P_1}\ge S_Q.\)

We are interested in the convergence of \((P_n)\) to a possible equilibrium, which is however not guaranteed without appropriate conditions on \((b_n)\). To avoid triviality, we do not consider \(Q=\delta _0\), the Dirac measure at 0.

Kingman’s model is simply the model with \(b_n=b\) for all n, with parameter \(b\in [0,1)\). We say a sequence of probability measures \((u_n)\) converges in total variation to u if the total variation \(\Vert u_n-u\Vert \) converges to zero. It was shown by Kingman [14] that \((P_n)\) converges to a probability measure, which we denote by \(\mathcal {K}\), depending only on b, Q and h but not on \(P_0.\)

Theorem 1

(Kingman’s theorem, [14]) If \(\int \frac{Q(dx)}{1-x/h}\ge b^{-1},\) then \((P_n)\) converges in total variation to

$$\begin{aligned} \mathcal {K}(dx)=\frac{b \theta _bQ(dx)}{\theta _b-(1-b)x}, \end{aligned}$$

where \(\theta _b\), as a function of b, is the unique solution of

$$\begin{aligned} \int \frac{b \theta _bQ(dx)}{\theta _b-(1-b)x}=1. \end{aligned}$$
(3)

If \(\int \frac{Q(dx)}{1-x/h}< b^{-1}\), then \((P_n)\) converges weakly to

$$\begin{aligned} \mathcal {K}(dx)=\frac{b Q(dx)}{1-x/h}+\Big (1-\int \frac{b Q(dy)}{1-y/h}\Big )\delta _{h}(dx). \end{aligned}$$

We say there is a condensation on h in Kingman’s model if \(Q(h)=Q(\{h\})=0\) but \(\mathcal {K}(h)>0\), which corresponds to the second case above. We call \(\mathcal {K}(h)\) the condensate size on h in Kingman’s model if \(Q(h)=0\). The terminology is due to the fact that if we additionally let \(P_0(h)=0,\) then no \(P_n\) has any mass on the extreme point h; however, asymptotically a certain amount of mass \(\mathcal {K}(h)\) will travel to and condense on h.
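For a concrete Q, the dichotomy in Theorem 1 and the constant \(\theta _b\) are easy to evaluate numerically. The sketch below uses the illustrative choice \(Q(dx)=dx\) on [0, 1] (so \(h=S_Q=1\) and \(\int \frac{Q(dx)}{1-x/h}=\infty \), hence the first case applies for every \(b\in (0,1)\)); the root bracket used below is an assumption that can be checked to hold for this Q.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch: solve equation (3) for theta_b with the illustrative choice
# Q(dx) = dx on [0, 1], for which the first case of Theorem 1 applies.
b = 0.2

def eq3(theta):
    # \int_0^1 b*theta*Q(dx)/(theta - (1-b)x)  minus  1
    val, _ = quad(lambda x: b * theta / (theta - (1 - b) * x), 0.0, 1.0)
    return val - 1.0

# For this Q, the left-hand side of (3) is strictly decreasing in theta and
# changes sign between (1-b) and 1; a small offset keeps the lower endpoint
# inside the admissible range theta > (1-b).
theta_b = brentq(eq3, (1 - b) + 1e-3, 1.0)
print("theta_b =", theta_b)   # equals the equilibrium mean fitness \int y K(dy)
```

For \(b=0.2\) this gives \(\theta _b\approx 0.806\), consistent with the iteration sketch in Sect. 1.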

2.2 Two Random Models

We recall the notion of weak convergence for random probability measures. Let \((\mu _n)\) be random probability measures supported on [0, 1]. The sequence converges weakly to a limit \(\mu \) if and only if for any continuous function f on [0, 1] we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}\left[ \int f(x)\mu _n(dx)\right] =\mathbb {E}\left[ \int f(x)\mu (dx)\right] . \end{aligned}$$

Next we introduce two random models which generalise Kingman’s model. Let \(\beta \in [0,1)\) be a random variable. Let \((\beta _n)\) be a sequence of i.i.d. random variables sampled from the distribution of \(\beta .\) If \(b_n=\beta _n\) for every n, we call it Kingman’s model with random mutation probabilities or simply the first random model. It has been proved in [19] that \((P_n)\) converges weakly to a globally stable equilibrium, which we denote by \(\mathcal {I}\), whose distribution depends on \(\beta , Q, h\) but not on \(P_0\).

For comparison we introduce another random model. If \(b_n=\beta \) for any n, we call it Kingman’s model with the same random mutation probability or the second random model. Conditionally on the value of \(\beta \), it becomes Kingman’s model. So we can think of this model as a compound version of Kingman’s model, with b replaced by \(\beta .\) We denote the limit of \((P_n)\) by \(\mathcal {A}\) which is a compound version of \(\mathcal {K}\).

In this paper, we continue to study the equilibrium and the condensation phenomenon in the first random model. By Corollary 4 in [19], if \(Q(h)=0\), then either \(\mathcal {I}(h)>0\) a.s. or \(\mathcal {I}(h)=0\) a.s. We say there is a condensation on h in the first random model if \(Q(h)=0\) but \(\mathcal {I}(h)>0\) a.s. We call \(\mathcal {I}(h)\) the condensate size on h if \(Q(h)=0\). A condensation criterion, which relies on a function of \(\beta \) and \(\mathcal {I}\), was established in [19]. As the equilibrium has no explicit expression, the condensation criterion cannot be used in concrete cases. This paper aims to solve these problems based on a matrix representation of the general model which carries over to the first random model. The objectives include an explicit expression for \(\mathcal {I}\), and finer properties of \(\mathcal {I}\) concerning its moments and condensation. Comparisons between Kingman’s model and the two random models will also be performed; for this purpose we assume additionally that

$$\begin{aligned} \mathbb {E}[\beta _n]=\mathbb {E}[\beta ]=b\in (0,1),\quad \forall \,n\ge 1\,\,. \end{aligned}$$

The case \(b=0\) is excluded to avoid triviality.

3 Notations and Results

3.1 Preliminary Results

In this section, we again recall some necessary results from [19]. We introduce

$$\begin{aligned} Q^k(dx):=\frac{ x^kQ(dx)}{\int y^kQ(dy)},\quad m_k:=\int x^kQ(dx), \quad \forall \, k\ge 0. \end{aligned}$$

We introduce the notion of an invariant measure. A random measure \(\nu \in M_1\) is invariant if it satisfies

$$\begin{aligned} \nu (dx){\mathop {=}\limits ^{d}}(1-\beta )\frac{x \nu (dx)}{\int _0^1 y \nu (dy)}+\beta Q(dx) \end{aligned}$$

with \(\beta \) independent of \(\nu \). Note that \(\mathcal {I}\), the limit of \((P_n)\) in the first random model, is an invariant measure.

In the general model a forward sequence \((P_n)\) does not necessarily converge. But the convergence may hold if we investigate the model in a backward way. A finite backward sequence \(( P_j^n)=( P_j^n)_{0\le j\le n}\) has parameters \(n, (b_j)_{1\le j\le n},Q, P_n^n, h\) with \(h=S_{P_n^n}\) and satisfies

$$\begin{aligned} P_j^n(dx)=(1-b_{j+1})\frac{x P_{j+1}^n(dx)}{\int y P_{j+1}^n(dy)}+b_{j+1}Q(dx), \quad 0\le j\le n-1. \end{aligned}$$
(4)

Consider a particular case with \(P_n^n=\delta _h\). Then \(P_j^n\) converges in total variation to a limit, denoted by \(\mathcal {G}_j=\mathcal {G}_{j,h}\) (and \(\mathcal {G}=\mathcal {G}_0, \mathcal {G}_Q=\mathcal {G}_{0,S_Q}\)), as n goes to infinity with j fixed, such that

$$\begin{aligned} \mathcal {G}_{j-1}(dx)=(1-b_{j})\frac{x \mathcal {G}_j(dx)}{\int y \mathcal {G}_j(dy)}+b_{j}Q(dx), \quad j\ge 1 \end{aligned}$$
(5)

where \(\mathcal {G}:[0,1)^\infty \rightarrow M_1\) is a measurable function, with \(\mathcal {G}_j=\mathcal {G}(b_{j+1},b_{j+2},\ldots )\), which is supported on \([0,S_Q]\cup \{h\}\) for any j. Moreover, (5) can be further developed as

$$\begin{aligned} \mathcal {G}_0(dx){=}G_0\delta _h(dx)+\sum _{j=0}^{\infty }\prod _{l=1}^{j}\frac{(1-b_l)}{\int y\mathcal {G}_l(dy)}b_{j+1}m_jQ^j(dx). \end{aligned}$$
(6)

where \(G_0=G_{0,h}=1-\sum _{j=0}^{\infty }\prod _{l=1}^{j}\frac{(1-b_l)}{\int y\mathcal {G}_l(dy)}b_{j+1}m_j.\) Then \(\mathcal {G}_0\) can be considered as a convex combination of probability measures \(\{\delta _h, Q,Q^1,Q^2,\ldots \}\). We introduce also \(G_j=G_{j,h}\) for \(\mathcal {G}_{j,h}\) for any j and \(G=G_0, G_Q=G_{0,S_Q}.\)

The above results hold regardless of the values of \((b_n)\). So they hold also in the other three models. In particular, we replace the symbol \(\mathcal {G}, G\) by \(\mathcal {I}, I\) in the first random model (i.e., the terms involving \(\mathcal {G}\), which are \( \mathcal {G}, \mathcal {G}_Q, \mathcal {G}_j, \mathcal {G}_{j,h}, \mathcal {G}_{j, S_Q}\), are replaced by \(\mathcal {I}, \mathcal {I}_Q, \mathcal {I}_j, \mathcal {I}_{j,h}, \mathcal {I}_{j, S_Q}\). The change from G to I is done in the same way. The same rule applies to the other two models), by \(\mathcal {A}, A\) in the second random model and by \(\mathcal {K}, K\) in Kingman’s model.

For the first random model, \((\mathcal {I}_j)\) is stationary ergodic and \(\mathcal {I}\) is the weak limit of \((P_n)\). Moreover \(\mathbb {E}\left[ \ln \frac{(1-\beta )}{\int y\mathcal {I}_Q(dy)}\right] \in [-\infty ,-\ln \int yQ(dy)]\) is well defined, and its value does not depend on the joint law of \((\beta , \mathcal {I})\). This term is the key quantity in the condensation criterion. Note that we have neither an explicit expression for \(\mathcal {I}_Q\) nor an estimate of \(\mathbb {E}\left[ \ln \frac{(1-\beta )}{\int y\mathcal {I}_Q(dy)}\right] .\)

Theorem 2

(Condensation criterion, Theorem 3 in [19])

  1.

    If \(h=S_Q\), then there is no condensation on \(S_Q\) if

    $$\begin{aligned} \mathbb {E}\left[ \ln \frac{S_Q(1-\beta )}{\int y\mathcal {I}_Q(dy)}\right] <0. \end{aligned}$$
    (7)
  2.

    If \(h>S_Q\), then there is no condensation on h if and only if

    $$\begin{aligned} \mathbb {E}\left[ \ln \frac{h(1-\beta )}{\int y\mathcal {I}_Q(dy)}\right] \le 0. \end{aligned}$$
    (8)

3.2 Notations on Matrices

The most important tool in this paper is the matrix representation of the general model. We first need to introduce some notation and functions related to matrices. This part can be skipped at first reading.

(1). Define

$$\begin{aligned} \gamma _j=\frac{1-b_j}{b_j},\quad \gamma =\frac{1-b}{b},\quad \varGamma _j=\frac{1-\beta _j}{\beta _j},\quad \varGamma =\frac{1-\beta }{\beta } \end{aligned}$$

where all four terms belong to \((0,\infty ].\) For any \(1\le j\le n\le \infty \) (except \(j=n=\infty \)), define

$$\begin{aligned} W_x^{j,n}:=\left( \begin{array}{ccccc}x&{}x^2&{}x^3&{}\cdots &{}x^{n-j+2}\\ -\gamma _j&{}m_1&{}m_2&{}\cdots &{}m_{n-j+1}\\ 0&{}-\gamma _{j+1}&{}m_1&{}\cdots &{}\vdots \\ 0&{}0&{}\ddots &{}\ddots &{}\vdots \\ 0&{}0&{}\cdots &{}-\gamma _{n}&{}m_1\end{array}\right) , \end{aligned}$$
(9)

and

$$\begin{aligned} W^{j,n}:=\int W_x^{j,n}Q(dx)=\left( \begin{array}{ccccc}m_1&{}m_2&{}m_3&{}\cdots &{}m_{n-j+2}\\ -\gamma _j&{}m_1&{}m_2&{}\cdots &{}m_{n-j+1}\\ 0&{}-\gamma _{j+1}&{}m_1&{}\cdots &{}\vdots \\ 0&{}0&{}\ddots &{}\ddots &{}\vdots \\ 0&{}0&{}\cdots &{}-\gamma _{n}&{}m_1\end{array}\right) . \end{aligned}$$
(10)

Introduce

$$\begin{aligned} W_x^n=W_x^{1,n};\,\,\,\, \,\,W_x=W_x^{1,\infty };\,\,\,\,\, \,W_x^{n+1,n}=(x);\,\, \,\,\,\,W_x^{m,n}=(1),\forall m>n+1 \end{aligned}$$

and

$$\begin{aligned} W^n=W^{1,n}; \,\,\,\,\,\, W=W^{1,\infty }; \,\,\,\,\,\, W^{n+1,n}=(m_1); \,\,\,\,\,\, W^{m,n}=(1),\forall m>n+1. \end{aligned}$$
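The band structure of (9) and (10) is mechanical to set up in code. A minimal sketch building \(W^{j,n}\) is given below; the uniform mutant law (so \(m_k=1/(k+1)\)) and the values \(b_1,\ldots ,b_n\) are illustrative assumptions, and the snippet is only meant to make the indexing explicit.

```python
import numpy as np

# Sketch of W^{j,n} from (10); all concrete inputs are illustrative.
def build_W(j, n, b, m):
    # b[i-1] stands for b_i, m[k] for m_k; the matrix has size n-j+2.
    size = n - j + 2
    A = np.zeros((size, size))
    A[0, :] = m[1:size + 1]                        # first row: m_1, ..., m_{n-j+2}
    for r in range(1, size):
        gamma = (1 - b[j + r - 2]) / b[j + r - 2]  # gamma_{j+r-1}
        A[r, r - 1] = -gamma                       # subdiagonal entry -gamma_{j+r-1}
        A[r, r:] = m[1:size - r + 1]               # m_1, m_2, ... to its right
    return A

b = [0.3, 0.2, 0.25, 0.4]                          # b_1, ..., b_4
m = np.array([1.0 / (k + 1) for k in range(10)])   # m_k for Q uniform on [0, 1]
print(build_W(1, 4, b, m))
print("det W^{1,4} =", np.linalg.det(build_W(1, 4, b, m)))
# The determinant is positive when the gamma's are finite; cf. part (3) below.
```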

(2). For a matrix M of size \(m\times n\), let \(r_i(M)\) be the ith row and \(c_j(M)\) be the jth column, for \(1\le i\le m, 1\le j\le n\). If the matrix is like

$$\begin{aligned} M=\left( \begin{array}{ccccc}m_{a_1}&{}m_{a_2}&{}\cdots &{} m_{a_{n-1}}&{}m_{a_n}\\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{a_{n+1}}\\ \vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{a_{n+m}} \end{array}\right) , \end{aligned}$$

define, for any \(k\ge 0\)

$$\begin{aligned} U_k^rM:=\left( \begin{array}{ccccc}m_{k+a_1}&{}m_{k+a_2}&{}\cdots &{} m_{k+a_{n-1}}&{}m_{k+a_n}\\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{a_{n+1}}\\ \vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{a_{n+m}} \end{array}\right) . \end{aligned}$$

Here \(U_k^r\) increases the indices of the first row by k, with r referring to “row”, and U to “upgrade”. Similarly define

$$\begin{aligned} U_k^cM:=\left( \begin{array}{ccccc}m_{a_1}&{}m_{a_2}&{}\cdots &{} m_{a_{n-1}}&{}m_{k+a_n}\\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{k+a_{n+1}}\\ \vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ \cdot &{}\cdot &{}\cdots &{}\cdot &{}m_{k+a_{n+m}} \end{array}\right) \end{aligned}$$

which increases the indices of the last column by k, with c referring to “column”. In particular we write

$$\begin{aligned} U^r=U_1^r, \quad U^c=U_1^c. \end{aligned}$$

(3). Let \(|\cdot |\) denote the determinant operator for square matrices. It is easy to see that, if none of \(\gamma _j, \gamma _{j+1}, \ldots , \gamma _n\) is equal to infinity,

$$\begin{aligned} |U_k^rW^{j,n}|>0,\,\, |U_k^cW^{j,n}|>0, \quad \forall \, k\ge 0, 1\le j\le n+1. \end{aligned}$$

Define

$$\begin{aligned} L_{j,n}:=\frac{|W^{j+1,n}|}{|W^{j,n}|},\,\,\,\, R_{j,k}^n:=\frac{|U_k^rW^{j,n}|}{|W^{j,n}|},\,\,\,\,R_{j}^n:=R_{j,1}^n, \quad \forall \,1\le j\le n,\, k\ge 1. \end{aligned}$$
(11)

Specifically, let \(L_{n+1,n}=\frac{1}{m_1}, R^n_{n+1,k}=\frac{m_{k+1}}{m_1}\). In the above definition, if one or more of \(\gamma _j, \gamma _{j+1}, \ldots , \gamma _n\) are infinite, we define \(L_{j,n},R_{j,k}^n\) by letting the corresponding variables tend to infinity. As a convention, we will not mention again the issue of some \(\gamma _j\)’s being infinite when the function can be defined at infinity by taking limits.

Notice that expanding \(W^{j,n}\) along the first column, we have

$$\begin{aligned} L_{j,n}=\frac{|W^{j+1,n}|}{|W^{j,n}|}=\frac{|W^{j+1,n}|}{m_1|W^{j+1,n} |+\gamma _j |U^rW^{j+1,n}|}=\frac{1}{m_1+\gamma _jR^n_{j+1}}. \end{aligned}$$
(12)

If \(\gamma _j=\infty \), let

$$\begin{aligned} L_{j,n}=0,\,\, \gamma _jL_{j,n}=\frac{1}{R_{j+1}^n}. \end{aligned}$$

Lemma 1

In the general model, \(R_{j,k}^n\) increases strictly in n to a limit that we denote by \(R_{j,k}\) (and \(R_j=R_{j,1}\)) which satisfies

$$\begin{aligned} R_{j,k}=\frac{m_{k+1}+\gamma _jR_{j+1,k+1}}{m_1+\gamma _jR_{j+1}}. \end{aligned}$$
(13)

And \(\gamma _j L_{j,n}\) decreases strictly in n to a limit that we denote by \(\gamma _j L_{j}\) which satisfies

$$\begin{aligned} \gamma _jL_{j}=\left\{ \begin{array}{lr} {1/{R_{j+1}}}, &{} \text{ if } \gamma _j=\infty ;\\ &{} \\ \gamma _j/({m_1+\gamma _jR_{j+1}}), &{} \text{ if } \gamma _j<\infty . \end{array} \right. \end{aligned}$$
(14)

Moreover

$$\begin{aligned} \frac{\gamma _j}{m_1+\gamma _j}< \gamma _j L_j< \frac{\gamma _j}{m_1(1+ \gamma _j)}, \quad m_1<R_{j+1}<1. \end{aligned}$$
(15)

3.3 Main Results

(1). Matrix Representation

We set a convention that for a term, say \(\alpha _j\), in the general model, we use \({\widetilde{\alpha }}_j\) to denote the corresponding term in the first random model and \({\widehat{\alpha }}_j\) in the second random model, \({\overline{\alpha }}_j\) in Kingman’s model. If the corresponding term does not depend on the index j, we just omit the index.

Consider a finite backward sequence \((P_j^n)\) in the general model:

$$\begin{aligned} P_n^n=Q, \quad P_j^n(dx)=(1-b_{j+1})\frac{xP_{j+1}^n(dx)}{\int yP_{j+1}^n(dy)}+b_{j+1}Q(dx), \quad \,0\le j\le n-1. \end{aligned}$$
(16)

The previous sequence used in Sect. 3.1 starts with \(P_n^n=\delta _h\) and this one starts with \(P_n^n=Q\). The advantage of this change is that the latter enjoys a matrix representation, which is the most important tool in this paper.

Lemma 2

Consider \((P_j^n)\) in (16). For any \(0\le j\le n\),

$$\begin{aligned} \frac{xP_{j}^n(dx)}{\int yP_j^n(dy)}=\frac{|W_x^{j+1,n}|}{|W^{j+1,n}|}Q(dx), \end{aligned}$$
(17)

and

$$\begin{aligned} P_j^n(dx)=(1-b_{j+1})\frac{|W_x^{j+2,n}|}{|W^{j+2,n}|}Q(dx) +b_{j+1}Q(dx). \end{aligned}$$
(18)
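Lemma 2 can be checked numerically by comparing the backward recursion (16) with the determinant ratios. In the minimal sketch below, Q is a uniform law discretised on a grid and \(b_1,\ldots ,b_n\) are fixed illustrative values; the moments \(m_k\) are computed from the same discrete Q, so that the moment identities implied by (18) hold up to floating-point error.

```python
import numpy as np

# Numerical check of the matrix representation (18) for the backward
# sequence (16); all concrete inputs are illustrative assumptions.
x = np.linspace(0.0, 1.0, 401)
Q = np.full_like(x, 1.0 / x.size)                       # discretised mutant law
n = 6
b = [0.3, 0.2, 0.25, 0.4, 0.35, 0.15]                   # b_1, ..., b_n
m = np.array([np.dot(x**k, Q) for k in range(n + 5)])   # discrete moments m_k

def det_W(j, k_shift=0):
    # |W^{j,n}| from (10); k_shift = k gives |U_k^r W^{j,n}| (first row shifted).
    size = n - j + 2
    A = np.zeros((size, size))
    A[0, :] = m[1 + k_shift: size + 1 + k_shift]
    for r in range(1, size):
        gamma = (1 - b[j + r - 2]) / b[j + r - 2]        # gamma_{j+r-1}
        A[r, r - 1] = -gamma
        A[r, r:] = m[1: size - r + 1]
    return np.linalg.det(A)

# Backward recursion (16): from P_n^n = Q down to P_0^n.
P = Q.copy()
for j in range(n - 1, -1, -1):                           # j = n-1, ..., 0
    P = (1 - b[j]) * x * P / np.dot(x, P) + b[j] * Q     # uses b_{j+1}

# Moments of P_0^n versus (18):
#   \int y^k P_0^n(dy) = (1-b_1) |U_k^r W^{2,n}| / |W^{2,n}| + b_1 m_k.
for k in (1, 2, 3):
    lhs = np.dot(x**k, P)
    rhs = (1 - b[0]) * det_W(2, k) / det_W(2) + b[0] * m[k]
    print(k, lhs, rhs)
```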

Letting n go to infinity, we obtain the following.

Theorem 3

For j fixed and n tending to infinity, \(P_j^n\) converges weakly to a limit, denoted by \(\mathcal {H}_j\). If we denote \(\mathcal {H}=\mathcal {H}_0\), then \(\mathcal {H}:[0,1)^\infty \rightarrow M_1\) is a measurable function such that

$$\begin{aligned} \mathcal {H}_j=\mathcal {H}(b_{j+1},b_{j+2},\ldots ), \end{aligned}$$
(19)

and

$$\begin{aligned} \mathcal {H}_{j}(dx)=(1-b_{j+1})\frac{x \mathcal {H}_{j+1}(dx)}{\int y \mathcal {H}_{j+1}(dy)}+b_{j+1}Q(dx). \end{aligned}$$
(20)

Moreover

$$\begin{aligned} \frac{1-b_{j+1}}{\int y \mathcal {H}_{j}(dy)}=\gamma _{j+1}L_{j+1}. \end{aligned}$$
(21)

Note that \((\mathcal {H}_j)\) is the limit of \((P_j^n)\) with \(P_n^n=Q\), and \((\mathcal {G}_j)\) is the limit of \((P_j^n)\) with \(P_n^n=\delta _h\). When \(h=S_Q,\) it remains open whether \(\mathcal {H}=\mathcal {G}_Q\) or not. But the equality holds in the first random model.

Corollary 1

It holds that

$$\begin{aligned} (\mathcal {I}_{j,S_Q}){\mathop {=}\limits ^{d}}\left( {\widetilde{\mathcal {H}}}_j\right) . \end{aligned}$$

(2). Condensation Criterion

A remarkable application of the matrix representation is that the condensation criterion in Theorem 2 can be written in a simpler and more tractable form using matrices.

Corollary 2

(Condensation criterion)

  1.

    If \(h=S_Q\), then there is no condensation on \(\{S_Q\}\) if

    $$\begin{aligned} \mathbb {E}\left[ \ln S_Q\varGamma _{1}{\widetilde{L}}_{1}\right] <0. \end{aligned}$$
    (22)
  2.

    If \(h>S_Q\), then there is no condensation on \(\{h\}\) if and only if

    $$\begin{aligned} \mathbb {E}\left[ \ln h\varGamma _{1}{\widetilde{L}}_{1}\right] \le 0.\end{aligned}$$
    (23)

Note that the key quantity \(\mathbb {E}\left[ \ln \frac{(1-\beta )}{\int y\mathcal {I}_Q(dy)}\right] \) in Theorem 2 is now rewritten as \(\mathbb {E}\left[ \ln \varGamma _{1}{\widetilde{L}}_{1}\right] \). An estimate of this quantity is needed to make the criterion applicable. To achieve this, we introduce the second important tool of this paper in the following lemma, which is of interest in its own right.

Lemma 3

Let \(f(x_1,\ldots ,x_n)\) be a \(C^2\) bounded real function with \(x_i\in \mathbb {R}, 1\le i\le n\). For \(1\le i, j\le n\), let \(f_{x_i}\) be the first-order partial derivative of f with respect to \(x_i\), and \(f_{x_ix_j}\) the second-order partial derivative with respect to \(x_i,x_j.\) Assume that \(\sum _{1\le i\ne j \le n}f_{x_ix_j}\le 0\). Let \((\xi _1,\ldots ,\xi _n)\) be n exchangeable random variables in \(\mathbb {R}\). Then

$$\begin{aligned} \mathbb {E}[f(\xi _1,\ldots ,\xi _n)]\ge \mathbb {E}[f(\xi _1,\ldots ,\xi _1)]. \end{aligned}$$
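A quick Monte Carlo sanity check of Lemma 3 can be made with the illustrative choice \(f(x_1,\ldots ,x_n)=-(x_1+\cdots +x_n)^2\), which on the bounded range used here satisfies the assumptions of the lemma with all mixed second-order partial derivatives equal to \(-2\); i.i.d. variables are in particular exchangeable.

```python
import numpy as np

# Monte Carlo check of Lemma 3 for the illustrative choice
# f(x_1,...,x_n) = -(x_1 + ... + x_n)^2, whose mixed second partials
# are all -2 <= 0, with xi_1,...,xi_n i.i.d. uniform on [0, 1].
rng = np.random.default_rng(0)
n, samples = 5, 200_000

xi = rng.uniform(0.0, 1.0, size=(samples, n))
lhs = np.mean(-(xi.sum(axis=1) ** 2))       # E[ f(xi_1, ..., xi_n) ]
rhs = np.mean(-((n * xi[:, 0]) ** 2))       # E[ f(xi_1, ..., xi_1) ]
print(lhs, ">=", rhs, ":", lhs >= rhs)
```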

The estimate of \(\mathbb {E}[\ln \varGamma _1{\widetilde{L}}_1]\) is given as follows.

Theorem 4

We have

$$\begin{aligned} \mathbb {E}[\ln \varGamma {{\widehat{L}}}]\le \mathbb {E}[\ln \varGamma _1{\widetilde{L}}_1]\le \ln \gamma {{\overline{L}}} \end{aligned}$$
(24)

where

$$\begin{aligned} \gamma {{\overline{L}}}=\frac{1-b}{\int y \mathcal {K}_Q(dy)}=\left\{ \begin{array}{ll} \frac{1-b}{\theta _b}, &{} \text{ if } \int \frac{Q(dx)}{1-x/S_Q}> b^{-1};\\ \\ \frac{1}{S_Q}, &{} \text{ if } \int \frac{Q(dx)}{1-x/S_Q}\le b^{-1},\end{array} \right. \end{aligned}$$
(25)

and

$$\begin{aligned} \varGamma {{\widehat{L}}}= \frac{1-\beta }{\int y\mathcal {A}_Q(dy)}. \end{aligned}$$

Remark 1

The two inequalities in (24) are not strict in general. Here is an example. By Theorem 1, if \(\int \frac{Q(dx)}{1-x/S_Q}\le b^{-1},\) one can obtain by simple computations that \(\gamma {\overline{L}}=1/S_Q\). For the same reason, if \(\int \frac{Q(dx)}{1-x/S_Q}\le \beta ^{-1}\) almost surely, then \(\varGamma {{\widehat{L}}}=1/S_Q\) almost surely. So taking \(\beta \) and b small enough, the two inequalities in (24) become equalities.

As Kingman’s model is a special case of the first random model, Corollary 2 applies to Kingman’s model as well. The second inequality in (24) implies that, in general, condensation occurs more easily in Kingman’s model than in the first random model. This is made clearer in Theorem 5 below.

(3). Comparison Between the First Random Model and the Other Models

For succinctness, the results presented in this part are stated only for the case \(h=S_Q\). However, all the results can easily be proved for \(h>S_Q\) if we do not insist on strict inequalities. The main idea is to take a new mutant distribution \((1-\frac{1}{n})Q+\frac{1}{n}\delta _h\) and consider the limits of the equilibria as n tends to infinity.

We consider an equilibrium to be fitter if it has higher moments and bigger condensate size. In the following, we provide three theorems on the comparisons of moments and/or condensate sizes.

Theorem 5

Between Kingman’s model and the first random model, if \(\mathbb {P}(\beta =b)<1\), we have

  1.

    in terms of moments,

    $$\begin{aligned} \mathbb {E}\left[ \int y^k\mathcal {I}_Q(dy)\right] < \int y^k \mathcal {K}_Q(dy), \quad \forall \,k=1, 2,\ldots . \end{aligned}$$
  2.

    in terms of condensate size, if \(Q(S_Q)=0\) and \(I_Q>0\) a.s., then

    $$\begin{aligned} \mathbb {E}[I_Q]< K_Q. \end{aligned}$$

Theorem 6

Between the two random models, the following inequality holds

$$\begin{aligned} \mathbb {E}\left[ \ln \int y\mathcal {I}_Q(dy)\right] \le \mathbb {E}\left[ \ln \int y\mathcal {A}_Q(dy)\right] . \end{aligned}$$

Theorem 7

Between Kingman’s model and the second random model, it holds that

$$\begin{aligned} \mathbb {E}[A_Q]\ge K_Q, \text { if }Q(S_Q)=0. \end{aligned}$$

But there is no one-way inequality between \(\mathbb {E}[\int y\mathcal {A}_Q(dy)]\) and \( \int y\mathcal {K}_Q(dy)\).

It turns out that the first random model is completely dominated by Kingman’s model in terms of condensate size and moments of all orders of the equilibrium. We conjecture that the first random model is also dominated by the second random model in the same sense, as supported by a different comparison in Theorem 6. The relationship between Kingman’s model and the second random model is more subtle.

4 Perspectives

Recently, the phenomenon of condensation has been studied extensively in the literature. Bianconi et al. [3] argued that the phase transition in the condensation phenomenon is very close to Bose-Einstein condensation, where a large fraction of a dilute gas of bosons cooled to temperatures very close to absolute zero occupies the lowest quantum state. See also [2] for another model which can be mapped into the physics context. Under some assumptions, Dereich and Mörters [9] studied the limit of the scaled shape of the traveling wave of mass towards the condensation point in Kingman’s model, and the limit turns out to have the shape of a gamma distribution. A series of later papers [1, 8, 10, 11, 15] investigated the shape of the traveling wave in other models where condensation appears and proved that the gamma distribution is universal. Park and Krug [16] adapted Kingman’s model to a finite population with unbounded fitness distribution and observed, in a particular case, the emergence of a Gaussian distribution as the wave travels to infinity.

The first random model, as a natural random variant of Kingman’s model, provides an interesting example in which to study condensation in detail. The matrix representation can be a handy tool for studying the shape of the traveling wave, to verify whether the gamma-shape conjecture holds. On the other hand, we can also ask: will the relationships between the three models revealed and conjectured in this paper apply to other, more sophisticated models driven by the competition of two forces, in particular to models on the balance of selection and mutation? It is very tempting to say yes. The verification of this universality constitutes a long-term project.

5 Proofs

5.1 Proof of Lemma 2

Proof of Lemma 2

Note that

$$\begin{aligned} \frac{xP_{n}^n(dx)}{\int yP_{n}^n(dy)}=\frac{xQ(dx)}{m_1}=\frac{|W_x^{n+1,n}|}{|W^{n+1,n}|}Q(dx). \end{aligned}$$

Assume that for some \(0\le j\le n-1\),

$$\begin{aligned} \frac{xP_{j+1}^n(dx)}{\int y P_{j+1}^n(dy)}=\frac{| W_x^{j+2,n}|}{|W^{j+2,n}|}Q(dx). \end{aligned}$$

Then

$$\begin{aligned} P_{j}^n(dx)=(1-b_{j+1})\frac{|W_x^{j+2,n}|}{|W^{j+2,n}|}Q(dx)+b_{j+1}Q(dx). \end{aligned}$$

Consequently

$$\begin{aligned} \frac{xP_{j}^n(dx)}{\int yP_{j}^n(dy)}&=\frac{(1-b_{j+1})x\frac{|W_x^{j+2,n}|}{| W^{j+2,n}|}+b_{j+1}x}{(1-b_{j+1})\int y\frac{| W_y^{j+2,n}|}{| W^{j+2,n}|}Q(dy)+b_{j+1}m_1}Q(dx)\\&=\frac{\gamma _{j+1}x|W_x^{j+2,n} |+x|W^{j+2,n}|}{\gamma _{j+1} |U^rW^{j+2,n} |+m_1|W^{j+2,n}|}Q(dx)=\frac{|W_x^{j+1,n}|}{|W^{j+1,n}|}Q(dx). \end{aligned}$$

The last equality is obtained by expanding \(W_x^{j+1,n}\) and \(W^{j+1,n}\) along the first column. By induction, we obtain (17). As a consequence, we also get (18). \(\square \)

Lemma 2 allows us to express \(P_j^n\) using \(\{Q,Q^{1},\ldots ,Q^{n-j}\}\). To write down the explicit expression, we introduce

$$\begin{aligned} \varPhi _{j,l,n}:=\left( \prod _{i=0}^{l-1}\gamma _{i+j}L_{i+j,n}\right) L_{j+l,n}m_{l+1},\quad n\ge j\ge 1, l\ge 0. \end{aligned}$$

Corollary 3

For \((P_j^n)\) with \(P_n^n=Q\)

$$\begin{aligned} P_{j}^n(dx)=\sum _{l=0}^{n-j}C^n_{j,l}Q^{l}(dx),\quad \,\,0\le j\le n-1 \end{aligned}$$
(26)

where \(C^n_{j,0}=b_{j+1};\quad C^n_{j,l}=(1-b_{j+1})\varPhi _{j+2,l-1,n},\quad 1\le l\le n-j.\)

Proof

Let \(0\le j\le n-1\). Note that for any \(1\le l\le n-j\)

$$\begin{aligned} \frac{|W^{j+l,n}|}{|W^{j,n}|}=\prod _{i=0}^{l-1}\frac{|W^{i+j+1,n}|}{|W^{i+j,n}|}=\prod _{i=0}^{l-1}L_{i+j,n}. \end{aligned}$$

Expanding \(W_x^{j,n}\) along its first row and using the above result, we get

$$\begin{aligned} \frac{|W_x^{j,n}|}{|W^{j,n}|}&=\frac{1}{|W^{j,n}|}\sum _{l=1}^{n-j+2}\left( \prod _{i=0}^{l-2}\gamma _{i+j}\right) |W^{j+l,n} |x^{l}&\nonumber \\&=\sum _{l=1}^{n-j+2}\left( \prod _{i=0}^{l-2}\gamma _{i+j}L_{i+j,n}\right) L_{j+l-1,n}x^{l}=\sum _{l=1}^{n-j+2}\varPhi _{j,l-1,n}\frac{x^{l}}{m_{l}}.&\end{aligned}$$
(27)

Then we plug it into (18), with j changed to \(j+2\). \(\square \)

5.2 Proof of Lemma 1

We first need to prove a few more results on monotonicity. The following consequence of Hölder’s inequality will be used frequently:

$$\begin{aligned} \frac{m_{j+1}}{m_{j+2}}< \frac{m_j}{m_{j+1}}<\frac{1}{m_1}, \quad \forall j\ge 1. \end{aligned}$$
(28)
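As a quick illustration, these moment inequalities can be checked for any concrete Q; the uniform law below is an illustrative choice only.

```python
import numpy as np

# Check of (28) for an illustrative Q: uniform on [0, 1], so m_k = 1/(k+1).
m = np.array([1.0 / (k + 1) for k in range(12)])
for j in range(1, 10):
    assert m[j + 1] / m[j + 2] < m[j] / m[j + 1] < 1.0 / m[1]
print("inequalities (28) hold for j = 1, ..., 9")
```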

Lemma 4

For \(j\ge 1, n\ge j-1\), \(R^n_j\) increases strictly in n to \(R_j\in (0,1]\), as

$$\begin{aligned} m_1<R^n_{j}<R^{n+1}_{j}< 1 . \end{aligned}$$

Proof

By Hölder’s inequality, for \( j=n+1\),

$$\begin{aligned} m_1<R^{n}_{n+1}=\frac{m_2}{m_1}<\frac{m_1m_2+\gamma _{n+1}m_3}{m_1^2+\gamma _{n+1}m_2}=R^{n+1}_{n+1}< 1. \end{aligned}$$

Consider \(n\ge j.\) Without loss of generality let \(j=1.\) Using (11)

$$\begin{aligned} R_{1}^n=\frac{|U^rW^{n}|}{|W^{n}|}. \end{aligned}$$

The two matrices \(U^rW^{n}, W^{n}\) differ only on the first row, which is \((m_2, \ldots , m_{n+2})\) for the former, and \((m_1, \ldots , m_{n+1})\) for the latter. Again by Hölder’s inequality, we have

$$\begin{aligned} m_1< R^n_{1}<1,\,\,\forall \,n\ge 1. \end{aligned}$$

For the comparison of \(R^n_{1}\) and \(R^{n+1}_{1}\), we use Lemma 9 in the Appendix where the values \(x_0^n,x_0^{n+1}\) are exactly \(R^n_{1}\) and \(R^{n+1}_{1}\). \(\square \)

Simply applying the above lemma and (12), we obtain the following Corollary.

Corollary 4

For any \(j\ge 1\), \(\gamma _j L_{j,n}\) decreases strictly in n to \(\gamma _j L_{j}\). Define

$$\begin{aligned} \varPhi _{j,l}:=\left( \prod _{i=0}^{l-1}\gamma _{i+j}L_{i+j}\right) L_{j+l}m_{l+1},\quad \forall j\ge 1, l\ge 0. \end{aligned}$$

Then \(\varPhi _{j,l,n}=\varPhi _{j,l}=0\) if \(\gamma _{j+l}=\infty \), otherwise \(\varPhi _{j,l,n}\) decreases strictly in n to \(\varPhi _{j,l}\).

Corollary 5

For any \(j\ge 1, k\ge 1,\) \(R^n_{j,k}\) increases strictly in n to \(R_{j,k}\).

Proof

The case \(k=1\) has been proved by Lemma 4. We consider here \(k\ge 2\). Without loss of generality we let \(j=1.\) The idea is to apply Lemma 8 in the Appendix. Following the notations in Lemma 8 we set

$$\begin{aligned} a_l=\int y^{k+1}Q^l(dy)=\frac{m_{l+k+1}}{m_l},\quad b_l=\int yQ^l(dy)=\frac{m_{l+1}}{m_l}, \quad \forall \,0\le l\le n; \end{aligned}$$

and

$$\begin{aligned} c_l=C^{n-1}_{0,l},\,\,\, c'_l=C^n_{0,l}, \,\,\forall \,0\le l\le n-1; \quad c_n=0,\,\,\, c'_n=C^n_{0,n}. \end{aligned}$$

Then by the definition of \(R_{1,k}^n\) and Lemma 2

$$\begin{aligned} R^{n-1}_{1,k}=\frac{|U_k^rW^{n-1}|}{|W^{n-1}|}=\frac{\int y^{k+1} P_{0}^{n-1}(dy)}{\int y P_{0}^{n-1}(dy)}. \end{aligned}$$
(29)

So by (26)

$$\begin{aligned} R^{n-1}_{1,k}=\frac{\sum _{l=0}^{n}c_la_l}{\sum _{l=0}^{n}c_lb_l},\quad R^{n}_{1,k}=\frac{\sum _{l=0}^{n}c'_la_l}{\sum _{l=0}^{n}c'_lb_l}. \end{aligned}$$

For any \(n\ge 1\), by Hölder’s inequality

$$\begin{aligned} \frac{a_l}{b_l}=\frac{m_{l+k+1}}{m_{l+1}}<\frac{ m_{n+k+1}}{m_{n+1}}=\frac{a_{n}}{b_{n}}, \end{aligned}$$

and

$$\begin{aligned} a_l=\frac{m_{l+k+1}}{m_{l}}<\frac{ m_{n+k+1}}{m_{n}}=a_{n}, \quad b_l=\frac{m_{l+1}}{m_{l}}<\frac{ m_{n+1}}{m_{n}}=b_{n},\quad \forall \,0\le l\le n-1. \end{aligned}$$

Moreover \(a_0,\ldots , a_n, b_0,\ldots , b_n\) are all strictly positive numbers.

Next we consider the \(c_l\)’s and \(c_l'\)’s. Note that \(c_0=c_0'=b_1\). By Corollary 4, for \(1\le l\le n-1,\) if \(c_l>0\), then \(c_l>c_l'\), otherwise \(c_l=c_l'=0.\) Moreover \(c_n'=C_{0,n}^n=(1-b_1)\frac{m_{n+1}}{m_1}\prod _{i=0}^{n-1}\gamma _iL_{i,n}>0\). So we have the following

$$\begin{aligned} c_l\ge c'_l\ge 0, \quad \forall \,0\le l\le n-1;\quad \,\,0=c_n<c'_n;\quad \,\,\sum _{l=0}^nc_l=\sum _{l=0}^nc'_l=1. \end{aligned}$$

Now we apply Lemma 8 to conclude. \(\square \)

Proof of Lemma 1

As we have already proved Corollaries 4 and 5, it remains to prove (13) and (15). Expanding \(U_k^rW^{j,n}\) and \(W^{j,n}\) along the first column, we get

$$\begin{aligned} R^n_{j,k}=\frac{|U_k^rW^{j,n}|}{|W^{j,n}|}=\frac{m_{k+1}|W^{j+1,n} |+\gamma _j |U_{k+1}^rW^{j+1,n}|}{m_1|W^{j+1,n} |+\gamma _j |U^rW^{j+1,n}|}=\frac{m_{k+1}+\gamma _jR^n_{j+1,k+1}}{m_1+\gamma _jR^n_{j+1}}. \end{aligned}$$

Letting \(n\rightarrow \infty \), we obtain (13).

To show (15), without loss of generality, let \(j=1\). By Lemma 4

$$\begin{aligned} m_1<R_{2,1}^n<1. \end{aligned}$$

As \(R_{2,1}^n\) decreases to \(R_{2,1}\), we have also \(R_{2,1}<1\) which gives the strict upper bound for \(R_{2,1}\). Using (12), the above display yields

$$\begin{aligned} \frac{\gamma _1}{m_1+\gamma _1}< \gamma _1L_{1,n}< \frac{\gamma _1}{m_1(1+\gamma _1)}. \end{aligned}$$
(30)

Since \(\gamma _1L_{1,n}\) decreases strictly to \(\gamma _1L_1\), we obtain the following using again (12)

$$\begin{aligned} \gamma _1L_1=\frac{\gamma _1}{m_1+\gamma _1R_{2,1}}<\frac{\gamma _1}{m_1(1+\gamma _1)}. \end{aligned}$$

Then we get \(R_{2,1}>m_1\). Moreover as \(R_{2,1}<1,\)

$$\begin{aligned} \gamma _1L_1=\frac{\gamma _1}{m_1+\gamma _1R_{2,1}}>\frac{\gamma _1}{m_1+\gamma _1}. \end{aligned}$$

So we have found the strict lower and upper bounds for \(R_{2,1}\) and \(\gamma _1L_1\). \(\square \)

5.3 Proofs of Theorem 3 and Corollary 1

For measures \(u,v\in M_1\), we write

$$\begin{aligned} u\le v \end{aligned}$$

if \(u([0,x])\ge v([0,x])\) for any \(x\in [0,1]\).

Proof of Theorem 3

Note that \(Q^j\le Q^{j+1}\) for any j. Then using Corollary 3 and Lemma 1, \(P_j^n\le P_j^{n+1}\). So \(P_j^n\) converges, at least weakly, to a limit \(\mathcal {H}_j\). The weak convergence allows us to obtain (20) from (4). Expanding (20), we obtain

$$\begin{aligned} \mathcal {H}_j(dx)&=H_j\delta _{S_Q}(dx)+b_{j+1}Q(dx)+\sum _{l=1}^{\infty }(1-b_{j+1})\varPhi _{j+2,l-1}Q^{l}(dx),\quad \forall \, j\ge 0. \end{aligned}$$
(31)

where \(H_j=1-b_{j+1}-\sum _{l=1}^{\infty }(1-b_{j+1})\varPhi _{j+2,l-1}.\) To prove (21), we first use (18) and definition (11) to obtain

$$\begin{aligned} \int xP_j^n(dx)&=(1-b_{j+1})\frac{|U^rW^{j+2,n}|}{|W^{j+2,n}|} +b_{j+1}m_1\\&=(1-b_{j+1})R_{j+2}^n+b_{j+1}m_1=b_{j+1}(\gamma _{j+1}R_{j+2}^n+m_1)=\frac{b_{j+1}}{L_{j+1,n}}. \end{aligned}$$

A reformulation of the above equality reads

$$\begin{aligned} \frac{1-b_{j+1}}{\int yP_j^n(dy)}=\gamma _{j+1}L_{j+1,n}. \end{aligned}$$

Using the convergences as \(n\rightarrow \infty \), we obtain (21). \(\square \)

Proof of Corollary 1

By (19), the \({\widetilde{\mathcal {H}}}_j\) are identically distributed. By (20), \({\widetilde{\mathcal {H}}}_j\) is an invariant measure on \([0,S_Q]\) with \(S_{{\widetilde{\mathcal {H}}}_j}=S_Q\) a.s. Recall that \(\mathcal {I}_{j,S_Q}\) is also invariant on \([0,S_Q]\). Then by Theorem 4 in [19], \({\widetilde{\mathcal {H}}}_j{\mathop {=}\limits ^{d}}\mathcal {I}_{j,S_Q}.\) By (5) and (20), for both sequences the multi-dimensional distributions are determined in the same way by the one-dimensional distributions. So the two sequences have the same multi-dimensional distributions, and the multi-dimensional distributions are consistent within each sequence. By Kolmogorov’s extension theorem (Theorem 5.16, [13]), consistent multi-dimensional distributions determine the distribution of the sequence, which yields the same distribution for the two sequences. \(\square \)

5.4 Proof of Corollary 2

Proof of Corollary 2

Recall that \(\mathbb {E}\left[ \ln \frac{1-\beta }{\int y\mathcal {I}_Q(dy)} \right] \) exists and does not depend on the joint law of \(\beta , \mathcal {I}_Q\). Using (21) in the first random model, together with Corollary 1, we can rewrite Theorem 2 as Corollary 2. \(\square \)

5.5 Proof of Lemma 3

Proof of Lemma 3

Since \((\xi _1,\ldots ,\xi _n)\) is exchangeable, we can directly take a symmetric function f and prove the inequality under \(f_{x_1x_2}\le 0.\) For any \(a>b\), we first show that

$$\begin{aligned} f(a,\underbrace{b,\ldots ,b}_{n-1})+f(b,\underbrace{a,\ldots ,a}_{n-1})\ge f(\underbrace{a,\ldots ,a}_{n})+f(\underbrace{b,\ldots ,b}_{n}), \end{aligned}$$

which is proved as follows.

$$\begin{aligned}&f(\underbrace{a,\ldots ,a}_{n})+f(\underbrace{b,\ldots ,b}_{n})-f(a,\underbrace{b,\ldots ,b}_{n-1})-f(b,\underbrace{a,\ldots ,a}_{n-1})\\&\quad =\int _b^a(f_{x_1}(x_1, \underbrace{a,\ldots ,a}_{n-1})-f_{x_1}(x_1, \underbrace{b,\ldots ,b}_{n-1}))dx_1\\&\quad =\sum _{i=2}^n\int _b^a(f_{x_1}(x_1,\underbrace{b,\ldots ,b}_{i-2},a,\underbrace{a,\ldots ,a}_{n-i})-f_{x_1}(x_1,\underbrace{b,\ldots ,b}_{i-2},b,\underbrace{a,\ldots ,a}_{n-i}))dx_1\\&\quad =\sum _{i=2}^n\int _b^a\int _b^af_{x_1x_i}(x_1,\underbrace{b,\ldots ,b}_{i-2},x_i,\underbrace{a,\ldots ,a}_{n-i})dx_1d{x_i}\\&\quad =\sum _{i=2}^n\int _b^a\int _b^af_{x_1x_2}(x_1,x_2,\underbrace{b,\ldots ,b}_{i-2},\underbrace{a,\ldots ,a}_{n-i})dx_1d{x_2}\le 0 \end{aligned}$$

Applying the result proved above, for any \(1\le i\le n-1\),

$$\begin{aligned}&f(\underbrace{\xi _1,\ldots ,\xi _1}_i,\xi _{i+1},\xi _{i+2},\ldots ,\xi _n)+f(\underbrace{\xi _{i+1},\ldots ,\xi _{i+1}}_i,\xi _{1},\xi _{i+2},\ldots ,\xi _n)\\&\quad \ge f(\underbrace{\xi _1,\ldots ,\xi _1}_{i+1},\xi _{i+2},\ldots ,\xi _n)+f(\underbrace{\xi _{i+1},\ldots ,\xi _{i+1}}_{i+1},\xi _{i+2},\ldots ,\xi _n). \end{aligned}$$

Using the above inequality, we obtain

$$\begin{aligned}&\mathbb {E}[f(\underbrace{\xi _1,\ldots ,\xi _1}_i,\xi _{i+1},\xi _{i+2},\ldots ,\xi _n)]\\&\quad =\frac{1}{2}\mathbb {E}[f(\underbrace{\xi _1,\ldots ,\xi _1}_i,\xi _{i+1},\xi _{i+2},\ldots ,\xi _n)+f(\underbrace{\xi _{i+1},\ldots ,\xi _{i+1}}_i,\xi _{1},\xi _{i+2},\ldots ,\xi _n)]\\&\quad \ge \frac{1}{2}\mathbb {E}[f(\underbrace{\xi _1,\ldots ,\xi _1}_{i+1},\xi _{i+2},\ldots ,\xi _n)+f(\underbrace{\xi _{i+1},\ldots ,\xi _{i+1}}_{i+1},\xi _{i+2},\ldots ,\xi _n)]\\&\quad =\mathbb {E}[f(\underbrace{\xi _1,\ldots ,\xi _1}_{i+1},\xi _{i+2},\ldots ,\xi _n)]. \end{aligned}$$

Letting i run from 1 to \(n-1\), we prove the lemma. \(\square \)

5.6 Proof of Theorem 4

Define

$$\begin{aligned} \varPsi _n:=\frac{\prod _{j=1}^{n}\gamma _j}{|W^n|}, \quad n\ge 1. \end{aligned}$$

Lemma 5

For the three models, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln {\overline{\varPsi }}_n}{n}=\ln \gamma {\overline{L}}, \quad \lim _{n\rightarrow \infty }\frac{\ln {\widehat{\varPsi }}_n}{n}=\ln \varGamma {{\widehat{L}}},\quad \lim _{n\rightarrow \infty }\mathbb {E}\left[ \frac{\ln {\widetilde{\varPsi }}_n}{n}\right] =\mathbb {E}\left[ \ln \varGamma _1 {\widetilde{L}}_{1}\right] . \end{aligned}$$

Proof

We prove only the case in the first random model. Note that

$$\begin{aligned} \mathbb {E}[\ln {\widetilde{\varPsi }}_n]&=\mathbb {E}\left[ \ln \left( \frac{1}{m_1}\prod _{j=1}^{n-1} \varGamma _j \frac{|{\widetilde{W}}^{j+1,n}|}{|{\widetilde{W}}^{j,n}|}\right) \right] \\&=\sum _{j=1}^{n-1}\mathbb {E}[\ln (\varGamma _j {\widetilde{L}}_{j,n})]-\ln m_1=\sum _{j=1}^{n-1}\mathbb {E}[\ln (\varGamma _1 {\widetilde{L}}_{1,n-j+1})]-\ln m_1. \end{aligned}$$

Here we use the fact that \(\varGamma _j {\widetilde{L}}_{j,n}{\mathop {=}\limits ^{d}}\varGamma _1 {\widetilde{L}}_{1,n-j+1}.\) Then we apply Lemma 1. \(\square \)

Lemma 6

\(\ln \varPsi _n\) is strictly concave down in every \(b_j, 1\le j\le n\).

Proof

By basic computations we obtain for \(b_j\in (0,1)\),

$$\begin{aligned} \frac{\partial ^2 \ln \varPsi _n}{\partial b_j^2}=\frac{1}{b_j^4}\left( 1/\gamma _j-\frac{d|W^n|}{d\gamma _j}/|W^n|\right) \left( 2b_j-1/\gamma _j-\frac{d|W^n|}{d\gamma _j}/|W^n|\right) . \end{aligned}$$

By Lemma 11 in the Appendix, \(\frac{\partial ^2 \ln \varPsi _n}{\partial b_j^2}<0.\) \(\square \)

Proof of Theorem 4

To prove (24), we can use Lemma 5 and show instead

$$\begin{aligned} \mathbb {E}[\ln {\widehat{\varPsi }}_n]\le \mathbb {E}[\ln {\widetilde{\varPsi }}_n]\le \ln {\overline{\varPsi }}_n. \end{aligned}$$
(32)

For any \(1\le j<i\le n\), due to Proposition 1 in the Appendix,

$$\begin{aligned} \frac{\partial ^2\ln \varPsi _n}{\partial b_i\partial b_j}=-\frac{\partial ^2\ln |W^n|}{\partial b_i\partial b_j}<0. \end{aligned}$$

Then we apply Lemma 3 to obtain the first inequality of (32). Next we apply Lemma 6 and Jensen’s inequality for the second inequality of (32). To prove (25), we use (21) and Theorem 1. \(\square \)

5.7 Proof of Theorem 5

We need two preparatory results before proving the theorem.

Lemma 7

For any k, n, \(R^n_{1,k}\) is strictly concave down in every \(b_i\), \(1\le i\le n\).

Proof

Let \(b_i\in (0,1)\). Let

$$\begin{aligned} f= |U_k^rW^n |, \, g=|W^n|. \end{aligned}$$

So \(R^n_{1,k}=\frac{f}{g}\). Let \(f',f'',g',g''\) be derivatives with respect to \(\gamma _i\in (0,\infty )\). Then by Corollary 8 in the Appendix

$$\begin{aligned} \frac{dR^n_{1,k}}{d\gamma _i}=\frac{f'g-fg'}{g^2}>0 \end{aligned}$$

Notice that

$$\begin{aligned} \frac{g'}{g}>0,\quad \frac{f''}{g}=\frac{g''}{g}=0. \end{aligned}$$

The above statements are not difficult to see once it is clear how f and g can be computed; alternatively, one can refer to Lemma 10 in the Appendix. Then we obtain

$$\begin{aligned} \frac{d^2R^n_{1,k}}{d(\gamma _i)^2}=\frac{f''g-fg''}{g^2}-\frac{2g'}{g}\frac{f'g-fg'}{g^2}=-\frac{2g'}{g}\frac{dR^n_{1,k}}{d\gamma _i}<0. \end{aligned}$$

Moreover,

$$\begin{aligned} \frac{d\gamma _i}{db_i}=\frac{-1}{b_i^2}, \quad \frac{d^2\gamma _i}{d(b_i)^2}=\frac{2}{b_i^3}. \end{aligned}$$

Then

$$\begin{aligned} \frac{d^2R^n_{1,k}}{d(b_i)^2}&=\left( \frac{-1}{b_i^2}\right) ^2\frac{d^2R^n_{1,k}}{d(\gamma _i)^2}+\frac{2}{b_i^3}\frac{dR^n_{1,k}}{d\gamma _i}\\&=\frac{2(f'g-fg')}{g^2b_i^4}\left( b_i-\frac{g'}{g}\right) =\frac{2}{b_i^4}\frac{dR^n_{1,k}}{d\gamma _i}\left( b_i-\frac{g'}{g}\right) <0, \end{aligned}$$

where the inequality is due to Lemma 11 in the Appendix. \(\square \)

Corollary 6

For \(H_j\) defined in (31), we have

$$\begin{aligned} \frac{H_j}{1-b_{j+1}}=S_Q\gamma _{j+2}L_{j+2}\frac{H_{j+1}}{1-b_{j+2}}, \end{aligned}$$
(33)

and if \(Q(S_Q)=0\),

$$\begin{aligned} \frac{H_j}{1-b_{j+1}}=\lim _{k\rightarrow \infty }S_Q^{-k}R_{j+2,k}. \end{aligned}$$
(34)

Proof

By (20), we obtain

$$\begin{aligned} H_j=\frac{1-b_{j+1}}{\int y \mathcal {H}_{j+1}(dy)}S_QH_{j+1}. \end{aligned}$$

The above display together with (21) leads to (33). If \(Q(S_Q)=0\), then \(\lim _{k\rightarrow \infty }S_Q^{-k}m_{k+1}=0\). Using this fact and (18), we obtain

$$\begin{aligned} H_j&=\mathcal {H}_j(S_Q)=\lim _{k\rightarrow \infty }S_Q^{-k}\int y^k\mathcal {H}_j(dy)\\&=\lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }S_Q^{-k}\int y^kP_j^n(dy)\\&=\lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }S_Q^{-k}\left( (1-b_{j+1})R_{j+2,k}^n+b_{j+1}m_{k+1} \right) \\&=(1-b_{j+1})\lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }S_Q^{-k}R_{j+2,k}^n=(1-b_{j+1})\lim _{k\rightarrow \infty }S_Q^{-k}R_{j+2,k}.&\end{aligned}$$

\(\square \)

Proof of Theorem 5

There are two statements to prove.

1. By (13)

$$\begin{aligned} R_{1,k}=\frac{m_{k+1}+\gamma _1R_{2,k+1}}{m_1+\gamma _1R_{2}}. \end{aligned}$$

By Corollary 8 in the Appendix, \(R_{1,k}\) is strictly increasing in \(\gamma _1.\) Then

$$\begin{aligned} R_{1,k}>\frac{m_{k+1}}{m_1} \end{aligned}$$

implying that

$$\begin{aligned} \frac{m_{k+1}}{R_{2,k+1}}<\frac{m_{1}}{R_{2}}. \end{aligned}$$

The above inequality entails that for \(b_1\in (0,1)\)

$$\begin{aligned} \frac{\partial ^2 R_{1,k} }{\partial b_1^2}=\frac{2}{\left( 1+\frac{m_1}{R_2}-b_1\right) ^3}(1+\frac{m_1}{R_2})\frac{R_{2,k+1}}{R_2}(\frac{m_{k+1}}{R_{2,k+1}}-\frac{m_{1}}{R_{2}})<0. \end{aligned}$$

So \(R_{1,k}\) is strictly concave down in \(b_1.\)

In the following display, the first equality is due to (18) and the first inequality is by the above strict concavity. The second equality is due to Lemma 1 and the second inequality is by Lemma 7. The last equality is a consequence of (18) and Corollary 5.

$$\begin{aligned} \mathbb {E}\left[ \int y^k\mathcal {I}_{S_Q}(dy)\right]&=(1-b)\mathbb {E}[{\widetilde{R}}_{1,k}]+bm_k\\&<(1-b)\mathbb {E}[{\widetilde{R}}_{1,k} |{\beta _1=b}]+bm_k\\&=(1-b)\lim _{n\rightarrow \infty }\mathbb {E}[{\widetilde{R}}^n_{1,k}|{\beta _1=b}]+bm_k\\&\le (1-b)\lim _{n\rightarrow \infty }{\overline{R}}^n_{1,k}+bm_k\\&=\int y^k\mathcal {K}_{Q}(dy).&\end{aligned}$$

2. By Corollary 1, \(I_Q{\mathop {=}\limits ^{d}}{\widetilde{H}}_{0}\). Since \(I_Q>0\) a.s., by assertion 4) of Corollary 4 in [19], we have \(Q(S_Q)=0\). Note that \({\widetilde{H}}_j/(1-\beta _{j+1})\) involves only \(\beta _{j+2},\beta _{j+3},\ldots .\) Then by (33),

$$\begin{aligned} \mathbb {E}[I_Q]=\mathbb {E}[{\widetilde{H}}_0]&=\mathbb {E}\left[ (1-\beta _1)\frac{{\widetilde{H}}_0}{1-\beta _1}\right] \\&=(1-b)\mathbb {E}\left[ \frac{{\widetilde{H}}_0}{1-\beta _1}\right] =(1-b)S_Q\mathbb {E}\left[ \varGamma _{2}{\widetilde{L}}_2\frac{{\widetilde{H}}_1}{1-\beta _{2}}\right] . \end{aligned}$$

Moreover for \(b_2\in (0,1)\)

$$\begin{aligned} \gamma _2L_2=\frac{1-b_2}{b_2m_1+(1-b_2)R_{3,1}} \end{aligned}$$

and by (15)

$$\begin{aligned} \frac{\partial ^2\gamma _2L_2}{\partial b_2^2}=\frac{2m_1(m_1-R_{3,1})}{(b_2m_1+(1-b_2)R_{3,1})^3}<0. \end{aligned}$$

So the function \(\gamma _{2}L_2\frac{H_1}{1-b_{2}}\) is strictly concave down in \(b_2\), since \(\frac{H_1}{1-b_{2}}\) does not depend on \(b_2\). Using (34) and the above strict concavity, together with Lemma 7,

$$\begin{aligned} \mathbb {E}[{\widetilde{H}}_0]&=(1-b)\mathbb {E}\left[ \frac{{\widetilde{H}}_0}{1-\beta _1}\right] <(1-b)\mathbb {E}\left[ \frac{{\widetilde{H}}_0}{1-\beta _1} \Big |\beta _2=b\right] \\&=(1-b)\lim _{k\rightarrow \infty }S_Q^{-k}\mathbb {E}[{\widetilde{R}}_{2,k}| {\beta _2=b}]\\&= (1-b)\lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }S_Q^{-k}\mathbb {E}[{\widetilde{R}}^n_{2,k}|{\beta _2=b}]\\&\quad \quad \le (1-b)\lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }S_Q^{-k}\mathbb {E}[{\widetilde{R}}^n_{2,k}|{\beta _i=b, \forall i\ge 2}]=(1-b)\frac{{\overline{H}}_0}{1-b}={\overline{H}}_0.&\end{aligned}$$

\(\square \)

5.8 Proof of Theorem 6

Proof of Theorem 6

Note that, similarly to the proof of Lemma 5,

$$\begin{aligned} \mathbb {E}\left[ \ln \left( |{\widetilde{W}}^n|\prod _{j=1}^n\beta _j\right) \right]&=\mathbb {E}\left[ \ln \left( \frac{1}{m_1}\prod _{j=1}^{n-1} \beta _j \frac{|{\widetilde{W}}^{j,n}|}{| {\widetilde{W}}^{j+1,n} |}\right) \right] =\sum _{j=1}^{n-1}\mathbb {E}\left[ \ln \frac{\beta _1}{{\widetilde{L}}_{1,n-j+1}}\right] -\ln m_1. \end{aligned}$$

For the second random model, similarly

$$\begin{aligned} \mathbb {E}\left[ \ln \left( |{{\widehat{W}}}^n|\beta ^n\right) \right] =\sum _{j=1}^{n-1}\mathbb {E}\left[ \ln \frac{\beta }{{{\widehat{L}}}_{1,n-j+1}}\right] -\ln m_1. \end{aligned}$$

By Lemma 13 and (21),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}\left[ \ln \left( |{\widetilde{W}}^n|\prod _{j=1}^n\beta _j\right) \right] /n=\mathbb {E}\left[ \ln \frac{\beta _1}{ {\widetilde{L}}_1}\right] =\mathbb {E}\left[ \ln \int y \mathcal {I}_Q(dy)\right] , \end{aligned}$$
(35)

and

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left[ \ln \left( |{{\widehat{W}}}^n|\beta ^n\right) \right] /n=\mathbb {E}\left[ \ln \frac{\beta }{ {{\widehat{L}}}}\right] =\mathbb {E}\left[ \ln \int y \mathcal {A}_Q(dy)\right] . \end{aligned}$$
(36)

We compare next \(\mathbb {E}\left[ \ln \left( |{\widetilde{W}}^n|\prod _{j=1}^n\beta _j\right) \right] \) and \(\mathbb {E}\left[ \ln \left( |{{\widehat{W}}}^n|\beta ^n\right) \right] .\) Note that

$$\begin{aligned} \ln \left( | W^n|\prod _{j=1}^nb_j\right) =\ln |W^n|+\sum _{j=1}^n\ln b_j. \end{aligned}$$

Then the second-order partial derivative of \(\ln \left( | W^n|\prod _{j=1}^nb_j\right) \) with respect to \(b_s,b_t\) equals \(\frac{\partial ^2\ln |W^n|}{\partial b_s\partial b_t}\), which is, by Lemma 11 in the Appendix, strictly positive for any \(1\le s\ne t\le n\). Applying Lemma 3 to the negative of this function, we obtain

$$\begin{aligned} \mathbb {E}\left[ \ln \left( |{\widetilde{W}}^n|\prod _{j=1}^n\beta _j\right) \right] \le \mathbb {E}\left[ \ln \left( |{{\widehat{W}}}^n|\beta ^n\right) \right] . \end{aligned}$$

Then by (35) and (36) we conclude that

$$\begin{aligned} \mathbb {E}\left[ \ln \int y \mathcal {I}_Q(dy)\right] \le \mathbb {E}\left[ \ln \int y \mathcal {A}_Q(dy)\right] . \end{aligned}$$

\(\square \)

5.9 Proof of Theorem 7

Proof of Theorem 7

By Theorem 1,

$$\begin{aligned} K_Q=\left\{ \begin{array}{lr} 1-\int \frac{bQ(dx)}{1-x/S_Q}, &{} \text{ if } \int \frac{Q(dx)}{1-x/S_Q}< b^{-1};\\ &{} \\ 0, &{} \text{ if } \int \frac{Q(dx)}{1-x/S_Q}\ge b^{-1}. \end{array} \right. \end{aligned}$$

So \(K_Q\) is a concave up function of b, and consequently \(\mathbb {E}[A_Q]\ge K_Q.\)

To show that there is no one-way inequality between \(\mathbb {E}[\int y\mathcal {A}_Q(dy)]\) and \( \int y\mathcal {K}_Q(dy)\), we give a concrete example. Let \(Q(dx)=dx\). In this case, \(\int \frac{Q(dx)}{1-x/S_Q}=\int \frac{Q(dx)}{1-x}=\infty > b^{-1}\) for any \(b\in (0,1).\) By (25)

$$\begin{aligned} \int y\mathcal {K}_Q(dy)=\theta _b \end{aligned}$$

which satisfies equation

$$\begin{aligned} \int \frac{b \theta _bdx}{\theta _b-(1-b)x}=1. \end{aligned}$$

We show that \(\frac{d^2\theta _b}{db^2}\) can be strictly positive and strictly negative for different values of b. The above equation can be rewritten as

$$\begin{aligned} \int \frac{bdx}{1-tx}=1 \end{aligned}$$

with \(t=\frac{1-b}{\theta _b}\in (0,1)\) strictly decreasing in b. Then

$$\begin{aligned} b=-\frac{t}{\ln (1-t)},\quad \theta _b=\frac{1}{t}+\frac{1}{\ln (1-t)}. \end{aligned}$$
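The change of variables can be verified directly: for each \(t\in (0,1)\), the pair \((b,\theta _b)\) given by these formulas should satisfy equation (3) with \(Q(dx)=dx\). A minimal numerical sketch (the chosen values of t are illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Check that b = -t/ln(1-t), theta_b = 1/t + 1/ln(1-t) solve equation (3)
# when Q(dx) = dx; the integral should equal 1 for every t in (0, 1).
for t in (0.1, 0.5, 0.9):
    b = -t / np.log(1 - t)
    theta = 1.0 / t + 1.0 / np.log(1 - t)
    val, _ = quad(lambda x: b * theta / (theta - (1 - b) * x), 0.0, 1.0)
    print(f"t={t}:  b={b:.4f}  theta_b={theta:.4f}  integral in (3) = {val:.6f}")
```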

So

$$\begin{aligned} \frac{d\theta _b}{db}=\frac{d\theta _b/dt}{db/dt}=\frac{-(1-t)\ln ^2(1-t)+t^2}{-(1-t)t^2\ln (1-t)-t^3}=\frac{m(t)}{n(t)} \end{aligned}$$

with m(t) the numerator and n(t) the denominator. Then

$$\begin{aligned} \frac{d^2\theta _b}{db^2}=\frac{d(d\theta _b/db)}{dt}/\frac{db}{dt}=\frac{m'(t)n(t)-m(t)n'(t)}{n(t)^2\frac{db}{dt}} \end{aligned}$$

where

$$\begin{aligned}&m'(t)n(t)-m(t)n'(t)\\&=-2t(1-t)^2\ln ^3(1-t)+(-4t^2+3t^3)\ln ^2(1-t)-t^3(2+t)\ln (1-t)\\&=5t^6+O(t^7), \quad t\rightarrow 0.&\end{aligned}$$

As \(n(t)^2>0\) and \(\frac{db}{dt}<0\) for any \(t\in (0,1)\), we have \(\frac{d^2\theta _b}{db^2}>0\) for t small enough. However \(m'(0.5)n(0.5)-m(0.5)n'(0.5)= -4.1848\times 10^{-4}<0\), implying \(\frac{d^2\theta _b}{db^2}<0\) at \(t=0.5\). As t is a strictly decreasing function of b, we have shown that \(\frac{d^2\theta _b}{db^2}\) can be strictly positive and negative at different b’s. \(\square \)