Appendix 1: Recruitment probability
Here we derive the probability that species \(i\) in competition with \(n\) other species recruits a vacant site. The derivation, which is based on standard properties of Poisson random variables, is not new, but seems to be unfamiliar to many ecologists.
We consider a vacant site. The number of propagules (e.g., seeds) of species \(i\) reaching the vacant site is modelled by a Poisson random variable. We denote this number by \(s_i\). The parameter of the Poisson random variable is equal to \(f_i p_i\). That is, the probability of dispersing \(s_i\) seeds to the vacant site is
$$\begin{aligned} P(s_i;f_i p_i) = \mathrm{e}^{-f_ip_i} \frac{(f_ip_i)^{s_i}}{s_i!}. \end{aligned}$$
Species \(i\) competes with \(n\) other species. We label these species by \(j_1,j_2,\ldots ,j_n\). Conditional on the number of propagules \(s_i, s_{j_1}, \ldots , s_{j_n}\), the probability that species \(i\) wins the competition for the vacant site is
$$\begin{aligned}&{\text{ Prob }}\big (\text{ species } i \text{ occupies } \text{ vacant } \text{ site } \,\big |\,s_i,(s_{j_k})\big ) \\&\quad = {\left\{ \begin{array}{ll} \frac{\textstyle s_i}{\textstyle s_i + \sum _k s_{j_k}} &{} \text{ if } s_i > 0\\ 0 &{} \text{ if } s_i = 0. \end{array}\right. } \end{aligned}$$
We average this expression with respect to the Poisson distributions for \(s_i\) and \(s_{j_k}\),
$$\begin{aligned}&{\text{ Prob }}\big (\text{ species } i \text{ occupies } \text{ vacant } \text{ site }\big ) \\&\quad = \sum _{s_i=1}^\infty P(s_i;f_ip_i) \sum _{(s_{j_k})} \left( \prod _k P(s_{j_k};f_{j_k}p_{j_k}) \right) \frac{s_i}{s_i + \sum _k s_{j_k}} \\&\quad = \sum _{s_i=1}^\infty P(s_i;f_ip_i) \sum _{s_j=0}^\infty P(s_j;{\textstyle \sum _{k} f_{j_k}p_{j_k}}) \frac{s_i}{s_i + s_j} \\&\quad = f_i p_i \sum _{s_i=1}^\infty \mathrm{e}^{-f_ip_i} \frac{(f_ip_i)^{s_i-1}}{(s_i-1)!} \sum _{s_j=0}^\infty P(s_j;{\textstyle \sum _{k} f_{j_k}p_{j_k}}) \frac{1}{s_i + s_j} \\&\quad = f_i p_i \sum _{s_i=0}^\infty P(s_i;f_ip_i) \sum _{s_j=0}^\infty P(s_j;{\textstyle \sum _{k} f_{j_k}p_{j_k}}) \frac{1}{s_i + s_j + 1} \\&\quad = f_i p_i \sum _{s=0}^\infty P(s;f_ip_i + {\textstyle \sum _{k} f_{j_k}p_{j_k}}) \frac{1}{s + 1} \end{aligned}$$
where we have used, in the second and in the last equality, that the sum of independent Poisson random variables is again a Poisson random variable, with parameter equal to the sum of the parameters. Because
$$\begin{aligned} \sum _{s=0}^\infty P(s;\lambda ) \frac{1}{s + 1}&= \mathrm{e}^{-\lambda } \sum _{s=0}^\infty \frac{\lambda ^s}{(s+1)!} \\&= \frac{\mathrm{e}^{-\lambda }}{\lambda } \sum _{s=0}^\infty \frac{\lambda ^{s+1}}{(s+1)!} \\&= \frac{\mathrm{e}^{-\lambda }}{\lambda } \big (\mathrm{e}^\lambda - 1\big ) \\&= \frac{1 - \mathrm{e}^{-\lambda }}{\lambda } \\&= L(\lambda ), \end{aligned}$$
where the last line is the definition of the function \(L\), we have
$$\begin{aligned}&{\text{ Prob }}\big (\text{ species } i \text{ occupies } \text{ vacant } \text{ site }\big ) \\&\quad = f_i p_i \, L\left( {\textstyle f_ip_i + \sum \limits _{k} f_{j_k}p_{j_k}}\right) . \end{aligned}$$
The probability that no species occupies the vacant site, that is, the probability that the vacant site remains empty, is
$$\begin{aligned}&{\text{ Prob }}\big (\text{ no } \text{ species } \text{ occupies } \text{ vacant } \text{ site }\big ) \\&\quad = P(0;f_ip_i) \prod _k P(0;f_{j_k}p_{j_k}) \\&\quad = \mathrm{e}^{-\left( f_ip_i + \sum \limits _{k} f_{j_k}p_{j_k}\right) }. \end{aligned}$$
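The closed form can be verified numerically: truncating the Poisson sums at a modest cutoff reproduces \(f_i p_i\, L\big (f_ip_i+\sum _k f_{j_k}p_{j_k}\big )\) to machine precision. In the sketch below the parameter values are illustrative choices, not values from the text.

```python
import math

def poisson_pmf(s, lam):
    # P(s; lambda) = e^(-lambda) lambda^s / s!
    return math.exp(-lam) * lam**s / math.factorial(s)

def L(lam):
    return (1.0 - math.exp(-lam)) / lam

fp_i, fp_rest = 0.8, 1.7   # illustrative values of f_i p_i and sum_k f_{j_k} p_{j_k}
N = 60                     # truncation; the Poisson tails beyond N are negligible here

# direct average of s_i / (s_i + sum_k s_{j_k}) over the two Poisson distributions
direct = sum(poisson_pmf(si, fp_i) * poisson_pmf(sj, fp_rest) * si / (si + sj)
             for si in range(1, N) for sj in range(N))

closed = fp_i * L(fp_i + fp_rest)          # recruitment probability of species i
empty = poisson_pmf(0, fp_i + fp_rest)     # probability the site stays vacant

assert abs(direct - closed) < 1e-12
assert abs(empty - math.exp(-(fp_i + fp_rest))) < 1e-15
```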
Appendix 2: Proof of Result 1
We consider a subset \(\mathcal{C }\) of the set of all species \(\mathcal{S }\), and study whether there exists an equilibrium of dynamical system (6) in which the species in \(\mathcal{C }\) are present and the species in \(\mathcal{S }{\setminus }\mathcal{C }\) are absent. We write the dynamical system as
$$\begin{aligned} p_i^+ = F_i(\varvec{p}) f_i p_i \qquad i\in \mathcal{S }, \end{aligned}$$
(23a)
with
$$\begin{aligned} F_i(\varvec{p}) = \sum _{n=i}^{S-1} (h_n-h_{n+1})\, L\left( \sum _{m=1}^n f_m p_m \right) + h_S\, L\left( \sum _{m=1}^S f_m p_m \right) \end{aligned}$$
(23b)
and
$$\begin{aligned} L(x) = \frac{1-\mathrm{e}^{-x}}{x}. \end{aligned}$$
An equilibrium \(\widetilde{\varvec{p}}\) of (23) has to satisfy
$$\begin{aligned} \widetilde{p}_i = F_i(\widetilde{\varvec{p}}) f_i \widetilde{p}_i \qquad \text{ for } \text{ all } i\in \mathcal{S }. \end{aligned}$$
(24)
Equation (24) is trivially satisfied for \(i \in \mathcal{S }{\setminus }\mathcal{C }\), that is, for species that are absent at equilibrium (\(\widetilde{p}_i = 0\)). For species that are present at equilibrium (\(\widetilde{p}_i > 0\)), Eq. (24) simplifies to
$$\begin{aligned} F_{i_k}(\widetilde{\varvec{p}}) = g_{i_k} \qquad \text{ for } \text{ all } k=1,2,\ldots ,C. \end{aligned}$$
(25)
Preliminary computation
We derive an expression for \(F_i(\widetilde{\varvec{p}})\). As we will need a more general result below, we consider the quantity \(G_i\), obtained from \(F_i(\widetilde{\varvec{p}})\) by replacing the function \(L\) with an arbitrary function \(K\),
$$\begin{aligned} G_i = \sum _{n=i}^{S-1} (h_n-h_{n+1})\, K\left( \sum _{m=1}^n f_m \widetilde{p}_m \right) + h_S\, K\left( \sum _{m=1}^S f_m \widetilde{p}_m \right) . \end{aligned}$$
(26)
For a species \(i\) with \(i < i_C\), we determine the index \(k\) such that \(i_{k} \le i < i_{k+1}\) with \(k \in \{1,2,\ldots ,C-1\}\) and set \(k=0\) if \(i < i_1\). We prove that
$$\begin{aligned} G_i = {\left\{ \begin{array}{ll} \begin{aligned} &{} (h_i-h_{i_{k+1}})\, K\left( \sum _{j=1}^{k} f_{i_j} \widetilde{p}_{i_j} \right) \\ &{} + \sum _{\ell =k+1}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, K\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) \\ &{} + h_{i_C}\, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) \end{aligned} &{} \qquad \text{ if } i < i_C \\ \displaystyle h_i\, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) &{} \qquad \text{ if } i \ge i_C. \end{array}\right. } \end{aligned}$$
(27)
For \(i < i_C\) we determine the index \(k\) as above and we compute
$$\begin{aligned} G_i&= \sum _{n=i}^{S-1} (h_n-h_{n+1})\, K\left( \sum _{m=1}^n f_m \widetilde{p}_m \right) + h_S\, K\left( \sum _{m=1}^S f_m \widetilde{p}_m \right) \\&= \sum _{n=i}^{i_{k+1}-1} (h_n-h_{n+1})\, K\left( \sum _{m=1}^n f_m \widetilde{p}_m \right) \\&+ \sum _{\ell =k+1}^{C-1} \sum _{n=i_\ell }^{i_{\ell +1}-1} (h_n-h_{n+1})\, K\left( \sum _{m=1}^n f_m \widetilde{p}_m \right) \\&+ \sum _{n=i_C}^{S-1} (h_n-h_{n+1})\, K\left( \sum _{m=1}^n f_m \widetilde{p}_m \right) + h_S\, K\left( \sum _{m=1}^S f_m \widetilde{p}_m \right) \\&= \sum _{n=i}^{i_{k+1}-1} (h_n-h_{n+1})\, K\left( \sum _{j=1}^{k} f_{i_j} \widetilde{p}_{i_j} \right) \\&+ \sum _{\ell =k+1}^{C-1} \sum _{n=i_\ell }^{i_{\ell +1}-1} (h_n-h_{n+1})\, K\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) \\&+ \sum _{n=i_C}^{S-1} (h_n-h_{n+1})\, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) + h_S\, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) \\&= \bigg ( \sum _{n=i}^{i_{k+1}-1} (h_n-h_{n+1}) \bigg )\, K\left( \sum _{j=1}^{k} f_{i_j} \widetilde{p}_{i_j} \right) \\&+ \sum _{\ell =k+1}^{C-1} \bigg ( \sum _{n=i_\ell }^{i_{\ell +1}-1} (h_n-h_{n+1}) \bigg )\, K\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) \\&+ \bigg ( \sum _{n=i_C}^{S-1} (h_n-h_{n+1}) + h_S \bigg ) \, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) \\&= (h_i-h_{i_{k+1}})\, K\left( \sum _{j=1}^{k} f_{i_j} \widetilde{p}_{i_j} \right) \\&+ \sum _{\ell =k+1}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, K\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) \\&+ h_{i_C}\, K\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) . \end{aligned}$$
A similar computation holds for \(i \ge i_C\), proving (27).
Equilibrium conditions
Putting \(K=L\) and \(i=i_k\) in (27) we get
$$\begin{aligned} F_{i_k}(\widetilde{\varvec{p}}) = \sum _{\ell =k}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, L\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) + h_{i_C}\, L\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) . \end{aligned}$$
(28)
Hence, the equilibrium conditions (25) read
$$\begin{aligned}&\sum _{\ell =k}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, L\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) + h_{i_C}\, L\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) = g_{i_k} \\&\qquad \text{ for } \text{ all } k=1,2,\ldots ,C-1 \end{aligned}$$
(29a)
$$\begin{aligned}&h_{i_C}\, L\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) = g_{i_C}. \end{aligned}$$
(29b)
Note that the parameters \(h_i\) and \(f_i\) with \(i \in \mathcal{S }{\setminus }\mathcal{C }\) have been eliminated from (29).
Rearranging (29), that is, subtracting consecutive equations and dividing by \(h_{i_k}-h_{i_{k+1}}\) (respectively \(h_{i_C}\)), we get
$$\begin{aligned}&L\left( \sum _{j=1}^k f_{i_j} \widetilde{p}_{i_j} \right) = \frac{g_{i_k}-g_{i_{k+1}}}{h_{i_k}-h_{i_{k+1}}} \qquad \text{ for } \text{ all } k=1,2,\ldots ,C-1 \end{aligned}$$
(30a)
$$\begin{aligned}&L\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) = \frac{g_{i_C}}{h_{i_C}}. \end{aligned}$$
(30b)
We have to invert the function \(L\) to solve for the occupancies \(\widetilde{p}_i\). Because \(L\) is a decreasing bijection from the positive real line \((0,\infty )\) onto the interval \((0,1)\), the inversion is possible if and only if
$$\begin{aligned} 0 < \frac{g_{i_k}-g_{i_{k+1}}}{h_{i_k}-h_{i_{k+1}}}&< 1 \qquad \text{ for } \text{ all } k=1,2,\ldots ,C-1 \end{aligned}$$
(31a)
$$\begin{aligned} \frac{g_{i_C}}{h_{i_C}}&< 1, \end{aligned}$$
(31b)
where we have used Assumption 1 to discard parameter combinations for which one or more of these inequalities is replaced by an equality.
After inverting the function \(L\), the solutions for the occupancies \(\widetilde{p}_i\) should be positive, that is, the partial sums \(\sum _{j=1}^k f_{i_j} \widetilde{p}_{i_j}\) should increase with \(k\). Because \(L\), and hence \(L^{-1}\), is decreasing, this condition is satisfied if and only if
$$\begin{aligned} \frac{g_{i_C}}{h_{i_C}} < \frac{g_{i_{C-1}}-g_{i_C}}{h_{i_{C-1}}-h_{i_C}} < \cdots < \frac{g_{i_2}-g_{i_3}}{h_{i_2}-h_{i_3}} < \frac{g_{i_1}-g_{i_2}}{h_{i_1}-h_{i_2}}, \end{aligned}$$
(32)
where we have used Assumption 1 to discard cases in which equality rather than inequality holds.
Combining (31) and (32) we get the following conditions:
$$\begin{aligned} \frac{g_{i_C}}{h_{i_C}} < \frac{g_{i_{C-1}}-g_{i_C}}{h_{i_{C-1}}-h_{i_C}} < \cdots < \frac{g_{i_2}-g_{i_3}}{h_{i_2}-h_{i_3}} < \frac{g_{i_1}-g_{i_2}}{h_{i_1}-h_{i_2}} < 1, \end{aligned}$$
(33)
which proves the first statement of Result 1.
If these conditions are satisfied, the equilibrium occupancies \(\widetilde{p}_i\) are given by
$$\begin{aligned}&\sum _{j=1}^k f_{i_j} \widetilde{p}_{i_j} = L^{-1}\left( \frac{g_{i_k}-g_{i_{k+1}}}{h_{i_k}-h_{i_{k+1}}} \right) \qquad k=1,2,\ldots ,C-1 \end{aligned}$$
(34a)
$$\begin{aligned}&\sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} = L^{-1}\left( \frac{g_{i_C}}{h_{i_C}} \right) , \end{aligned}$$
(34b)
with \(L^{-1}\) the inverse function of \(L\). Rewriting Eqs. (34) we get
$$\begin{aligned}&\widetilde{p}_{i_1} = g_{i_1} \; L^{-1}\bigg (\frac{g_{i_{1}}-g_{i_{2}}}{h_{i_{1}}-h_{i_{2}}}\bigg ) \end{aligned}$$
(35a)
$$\begin{aligned}&\widetilde{p}_{i_k} = g_{i_k} \left( L^{-1}\bigg (\frac{g_{i_{k}}-g_{i_{k+1}}}{h_{i_{k}}-h_{i_{k+1}}}\bigg ) - L^{-1}\bigg (\frac{g_{i_{k-1}}-g_{i_{k}}}{h_{i_{k-1}}-h_{i_{k}}}\bigg ) \right) \qquad k = 2,3,\ldots ,C-1 \end{aligned}$$
(35b)
$$\begin{aligned}&\widetilde{p}_{i_C} = g_{i_C} \left( L^{-1}\bigg (\frac{g_{i_{C}}}{h_{i_{C}}}\bigg ) - L^{-1}\bigg (\frac{g_{i_{C-1}}-g_{i_{C}}}{h_{i_{C-1}}-h_{i_{C}}}\bigg ) \right) . \end{aligned}$$
(35c)
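As a sanity check of Eqs. (34)-(35), one can pick illustrative trait values satisfying (33), invert \(L\) numerically, and verify the equilibrium conditions (25)/(28). The sketch below uses bisection for \(L^{-1}\) and the identification \(f_i = 1/g_i\) implicit in the step from (24) to (25); all numbers are illustrative.

```python
import math

def L(y):
    return (1.0 - math.exp(-y)) / y

def L_inv(x):
    # invert the decreasing bijection L : (0, inf) -> (0, 1) by bisection
    lo, hi = 1e-12, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) > x else (lo, mid)
    return 0.5 * (lo + hi)

# illustrative traits for C = 3 species, ordered i_1, i_2, i_3 (h decreasing)
g = [0.55, 0.30, 0.10]
h = [0.90, 0.60, 0.30]

# the ratios appearing in (30); they satisfy the chain (33)
r = [(g[0] - g[1]) / (h[0] - h[1]), (g[1] - g[2]) / (h[1] - h[2]), g[2] / h[2]]
x = [L_inv(rk) for rk in r]          # cumulative sums of f p in (34)

# occupancies from (35), using f_i = 1/g_i
p = [g[0] * x[0], g[1] * (x[1] - x[0]), g[2] * (x[2] - x[1])]

# check the equilibrium conditions (25)/(28): F_{i_k} = g_{i_k}
F = [sum((h[l] - h[l + 1]) * L(x[l]) for l in range(k, 2)) + h[2] * L(x[2])
     for k in range(3)]
assert all(abs(F[k] - g[k]) < 1e-9 for k in range(3))
assert all(pk > 0 for pk in p)
```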
Recall the definition \(L(x) = \frac{1-\mathrm{e}^{-x}}{x}\). The function \(\Lambda \) is defined as the inverse function of \(x \mapsto \frac{x}{1-\mathrm{e}^{-x}}\). We have
$$\begin{aligned} y=\Lambda (x) \ \Leftrightarrow \ x = \frac{1}{L(y)} \ \Leftrightarrow \ \frac{1}{x} = L(y) \ \Leftrightarrow \ y = L^{-1}\left( \frac{1}{x}\right) . \end{aligned}$$
Hence, \(L^{-1}(x) = \Lambda \big (\frac{1}{x}\big )\). Using this identity in (35), we obtain the second statement of Result 1.
The function \(\Lambda \) can be expressed in terms of the Lambert \(W\) function. We have
$$\begin{aligned} y=\Lambda (x)&\Leftrightarrow x = \frac{y}{1-\mathrm{e}^{-y}} = \frac{y\,\mathrm{e}^y}{\mathrm{e}^y-1} \\&\Leftrightarrow y\,\mathrm{e}^y = x\,\big (\mathrm{e}^y-1\big ) \\&\Leftrightarrow (y-x)\,\mathrm{e}^y= -x \\&\Leftrightarrow (y-x)\,\mathrm{e}^{y-x} = -x\,\mathrm{e}^{-x} \\&\Leftrightarrow y-x = W_0(-x\,\mathrm{e}^{-x}), \end{aligned}$$
with \(W_0\) the principal branch (the upper branch) of the Lambert \(W\) function. Hence,
$$\begin{aligned} \Lambda (x) = x + W_0(-x\,\mathrm{e}^{-x}). \end{aligned}$$
(36)
Similarly, for the function \(L^{-1}\),
$$\begin{aligned} y=L^{-1}(x)&\Leftrightarrow x = \frac{1-\mathrm{e}^{-y}}{y} = \frac{\mathrm{e}^y-1}{y\,\mathrm{e}^y} \\&\Leftrightarrow y\,\mathrm{e}^y = \frac{1}{x}\,\big (\mathrm{e}^y-1\big ) \\&\Leftrightarrow \left( y-\frac{1}{x}\right) \,\mathrm{e}^y = -\frac{1}{x} \\&\Leftrightarrow \left( y-\frac{1}{x}\right) \,\mathrm{e}^{y-\frac{1}{x}} = -\frac{1}{x}\,\mathrm{e}^{-\frac{1}{x}} \\&\Leftrightarrow y-\frac{1}{x} = W_0\left( -\frac{1}{x}\,\mathrm{e}^{-\frac{1}{x}}\right) , \end{aligned}$$
so that
$$\begin{aligned} L^{-1}(x) = \frac{1}{x} + W_0\left( -\frac{1}{x}\,\mathrm{e}^{-\frac{1}{x}}\right) . \end{aligned}$$
(37)
Note that Eqs. (36) and (37) satisfy \(L^{-1}(x) = \Lambda \big (\frac{1}{x}\big )\).
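Both closed forms can be evaluated with standard software; SciPy's `scipy.special.lambertw` computes \(W_0\) (branch \(k=0\)). The following sketch checks that (37) indeed inverts \(L\) and that \(L^{-1}(x)=\Lambda \big (\frac{1}{x}\big )\) holds; the sample points are arbitrary values in \((0,1)\).

```python
import numpy as np
from scipy.special import lambertw  # principal branch W_0 is k=0

def L(y):
    return (1.0 - np.exp(-y)) / y

def Lam(x):
    # Eq. (36): Lambda(x) = x + W_0(-x e^{-x}), for x > 1
    return x + lambertw(-x * np.exp(-x), 0).real

def L_inv(x):
    # Eq. (37): L^{-1}(x) = 1/x + W_0(-(1/x) e^{-1/x}), for 0 < x < 1
    u = 1.0 / x
    return u + lambertw(-u * np.exp(-u), 0).real

for x in [0.1, 0.3, 0.5, 0.8]:
    y = L_inv(x)
    assert y > 0 and abs(L(y) - x) < 1e-9        # L^{-1} really inverts L
    assert abs(L_inv(x) - Lam(1.0 / x)) < 1e-12  # consistency of (36) and (37)
```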
Because we have derived an explicit expression for the equilibrium \(\widetilde{\varvec{p}}\), we have also established the third statement of Result 1.
Appendix 3: Proof of Result 2
We consider an equilibrium \(\widetilde{\varvec{p}}\) of dynamical system (6) and study its local stability. We denote by \(\mathcal{C }\) the subset of the set of all species \(\mathcal{S }\) that are present in the equilibrium \(\widetilde{\varvec{p}}\).
The equilibrium \(\widetilde{\varvec{p}}\) is locally asymptotically stable if and only if the eigenvalues of the Jacobian matrix evaluated at \(\widetilde{\varvec{p}}\) lie inside the unit circle of the complex plane. Using the notation of (23), the Jacobian matrix \(J\) evaluated at \(\widetilde{\varvec{p}}\) is
$$\begin{aligned} J&= \quad \begin{pmatrix} f_1 F_1(\widetilde{\varvec{p}}) &{} 0 &{} \cdots &{} 0 \\ 0 &{} f_2 F_2(\widetilde{\varvec{p}}) &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots \\ 0 &{} 0 &{} \cdots &{} f_S F_S(\widetilde{\varvec{p}}) \end{pmatrix} \nonumber \\&+\begin{pmatrix} f_1\widetilde{p}_1\,\frac{\partial {F_1}}{\partial {p_1}}(\widetilde{\varvec{p}}) &{} f_1\widetilde{p}_1\,\frac{\partial {F_1}}{\partial {p_2}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_1\widetilde{p}_1\,\frac{\partial {F_1}}{\partial {p_S}}(\widetilde{\varvec{p}}) \\ f_2\widetilde{p}_2\,\frac{\partial {F_2}}{\partial {p_1}}(\widetilde{\varvec{p}}) &{} f_2\widetilde{p}_2\,\frac{\partial {F_2}}{\partial {p_2}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_2\widetilde{p}_2\,\frac{\partial {F_2}}{\partial {p_S}}(\widetilde{\varvec{p}}) \\ \vdots &{} \vdots &{} &{} \vdots \\ f_S\widetilde{p}_S\,\frac{\partial {F_S}}{\partial {p_1}}(\widetilde{\varvec{p}}) &{} f_S\widetilde{p}_S\,\frac{\partial {F_S}}{\partial {p_2}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_S\widetilde{p}_S\,\frac{\partial {F_S}}{\partial {p_S}}(\widetilde{\varvec{p}}) \end{pmatrix}. \end{aligned}$$
(38)
Consider a species \(j\) that is absent at equilibrium, \(j \in \mathcal{S }{\setminus }\mathcal{C }\) and \(\widetilde{p}_j = 0\). Row \(j\) of the second term of (38) is zero, so that row \(j\) of the Jacobian matrix \(J\) is zero except for the diagonal element \(f_j F_j(\widetilde{\varvec{p}})\). Hence, by applying the permutation \(P\) that shifts the indices in \(\mathcal{C }\) in front of the indices in \(\mathcal{S }{\setminus }\mathcal{C }\), we get a block matrix
$$\begin{aligned} P^\top J P = \begin{pmatrix} J_\mathcal{C }&{} K \\ 0 &{} J_{\mathcal{S }{\setminus }\mathcal{C }} \end{pmatrix}, \end{aligned}$$
(39)
with \(J_\mathcal{C }\) a \(C \times C\) matrix, \(K\) a \(C \times (S-C)\) matrix, and \(J_{\mathcal{S }{\setminus }\mathcal{C }}\) a diagonal \((S-C) \times (S-C)\) matrix. It follows that the set of eigenvalues of \(J\) is the union of the set of eigenvalues of \(J_\mathcal{C }\) and the set of eigenvalues of \(J_{\mathcal{S }{\setminus }\mathcal{C }}\).
Eigenvalues of \(J_{\mathcal{S }{\setminus }\mathcal{C }}\)
Because the matrix \(J_{\mathcal{S }{\setminus }\mathcal{C }}\) is diagonal, its eigenvalues are given by its diagonal elements. The diagonal elements are equal to \(f_j F_j(\widetilde{\varvec{p}})\) with \(j \in \mathcal{S }{\setminus }\mathcal{C }\), implying that the eigenvalues of \(J_{\mathcal{S }{\setminus }\mathcal{C }}\) are real and positive. To obtain the stability conditions, we look for conditions equivalent with \(f_j F_j(\widetilde{\varvec{p}}) < 1\) for all \(j \in \mathcal{S }{\setminus }\mathcal{C }\), that is, for all \(j \in \mathcal{S }\) excluding \(i_1,i_2,\ldots ,i_C\).
First, we consider a species \(j\) for which \(i_k < j < i_{k+1}\) with \(k = 1,2,\ldots ,C-1\). Expression (27) with \(K=L\) and \(i=j\) gives
$$\begin{aligned} F_j(\widetilde{\varvec{p}})&= (h_j-h_{i_{k+1}})\, L\left( \sum _{m=1}^k f_{i_m} \widetilde{p}_{i_m} \right) \\&+ \sum _{\ell =k+1}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, L\left( \,\,\sum _{m=1}^\ell f_{i_m} \widetilde{p}_{i_m} \right) + h_{i_C}\, L\left( \,\, \sum _{m=1}^C f_{i_m} \widetilde{p}_{i_m} \right) . \end{aligned}$$
We use Eq. (30a) for the first term of the right-hand side and Eq. (29a) for the second and third term of the right-hand side. This leads to
$$\begin{aligned} F_j(\widetilde{\varvec{p}}) = (h_j-h_{i_{k+1}})\, \frac{g_{i_k}-g_{i_{k+1}}}{h_{i_k}-h_{i_{k+1}}} + g_{i_{k+1}}, \end{aligned}$$
so that the condition \(f_j F_j(\widetilde{\varvec{p}}) < 1\) is equivalent with
$$\begin{aligned} (h_j-h_{i_{k+1}})\, \frac{g_{i_k}-g_{i_{k+1}}}{h_{i_k}-h_{i_{k+1}}} < g_j - g_{i_{k+1}}. \end{aligned}$$
Using that \(h_{i_k} > h_j > h_{i_{k+1}}\) (because \(i_k < j < i_{k+1}\)), we see that the latter condition is equivalent with
$$\begin{aligned} \frac{g_{i_k}-g_j}{h_{i_k}-h_j} < \frac{g_j-g_{i_{k+1}}}{h_j-h_{i_{k+1}}}. \end{aligned}$$
(40a)
Second, we consider a species \(j\) for which \(j < i_1\). Expression (27) with \(K=L\), \(i=j\) and \(k=0\) gives
$$\begin{aligned} F_j(\widetilde{\varvec{p}}) = (h_j-h_{i_1})\,L(0) + \sum _{\ell =1}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, L\left( \sum _{m=1}^\ell f_{i_m} \widetilde{p}_{i_m} \right) + h_{i_C}\, L\left( \sum _{m=1}^C f_{i_m} \widetilde{p}_{i_m} \right) . \end{aligned}$$
We use \(L(0)=1\) for the first term and Eq. (29a) for the second and third term of the right-hand side. This leads to
$$\begin{aligned} F_j(\widetilde{\varvec{p}}) = h_j-h_{i_1} + g_{i_1}, \end{aligned}$$
so that the condition \(f_j F_j(\widetilde{\varvec{p}}) < 1\) is equivalent with
$$\begin{aligned} h_j-h_{i_1} < g_j - g_{i_1}. \end{aligned}$$
Using that \(h_j > h_{i_1}\) (because \(j < i_1\)), we see that the latter condition is equivalent with
$$\begin{aligned} 1 < \frac{g_j-g_{i_1}}{h_j-h_{i_1}}. \end{aligned}$$
(40b)
Third, we consider a species \(j\) for which \(i_C < j\). Expression (27) with \(K=L\) and \(i=j\) (the case \(i \ge i_C\)) gives
$$\begin{aligned} F_j(\widetilde{\varvec{p}}) = h_{j}\, L\left( \sum _{m=1}^C f_{i_m} \widetilde{p}_{i_m} \right) = h_{j}\, \frac{g_{i_C}}{h_{i_C}}, \end{aligned}$$
where we have used Eq. (29b). Hence, the condition \(f_j F_j(\widetilde{\varvec{p}}) < 1\) is equivalent with
$$\begin{aligned} h_j \frac{g_{i_C}}{h_{i_C}} < g_j. \end{aligned}$$
Using that \(h_{i_C} > h_j\) (because \(i_C < j\)), we see that the latter condition is equivalent with
$$\begin{aligned} \frac{g_{i_C}-g_j}{h_{i_C}-h_j} < \frac{g_j}{h_j}. \end{aligned}$$
(40c)
The set of conditions (40) for all \(j \in \mathcal{S }{\setminus }\mathcal{C }\) is equivalent with conditions (16). Note that Assumption 1 allows us to discard parameter combinations for which one or more of these inequalities is replaced by an equality. Hence, when one of the conditions (16) is violated, the matrix \(J_{\mathcal{S }{\setminus }\mathcal{C }}\) has an eigenvalue \(\lambda >1\) and the equilibrium \(\widetilde{\varvec{p}}\) is unstable. As a result, we have proved that conditions (16) are necessary for local stability. To prove that conditions (16) are also sufficient for local stability, it suffices to prove that the eigenvalues of \(J_\mathcal{C }\) lie inside the unit circle.
Eigenvalues of \(J_{\mathcal{C }}\)
The matrix \(J_\mathcal{C }\) is
$$\begin{aligned} J_\mathcal{C }= 1\!\!1+ \begin{pmatrix} f_{i_1}\widetilde{p}_{i_1}\,\frac{\partial {F_{i_1}}}{\partial {p_{i_1}}}(\widetilde{\varvec{p}}) &{} f_{i_1}\widetilde{p}_{i_1}\,\frac{\partial {F_{i_1}}}{\partial {p_{i_2}}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_{i_1}\widetilde{p}_{i_1}\,\frac{\partial {F_{i_1}}}{\partial {p_{i_C}}}(\widetilde{\varvec{p}}) \\ f_{i_2}\widetilde{p}_{i_2}\,\frac{\partial {F_{i_2}}}{\partial {p_{i_1}}}(\widetilde{\varvec{p}}) &{} f_{i_2}\widetilde{p}_{i_2}\,\frac{\partial {F_{i_2}}}{\partial {p_{i_2}}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_{i_2}\widetilde{p}_{i_2}\,\frac{\partial {F_{i_2}}}{\partial {p_{i_C}}}(\widetilde{\varvec{p}}) \\ \vdots &{} \vdots &{} &{} \vdots \\ f_{i_C}\widetilde{p}_{i_C}\,\frac{\partial {F_{i_C}}}{\partial {p_{i_1}}}(\widetilde{\varvec{p}}) &{} f_{i_C}\widetilde{p}_{i_C}\,\frac{\partial {F_{i_C}}}{\partial {p_{i_2}}}(\widetilde{\varvec{p}}) &{} \cdots &{} f_{i_C}\widetilde{p}_{i_C}\,\frac{\partial {F_{i_C}}}{\partial {p_{i_C}}}(\widetilde{\varvec{p}}) \end{pmatrix}, \end{aligned}$$
with \(1\!\!1\) the identity matrix.
We compute the partial derivatives,
$$\begin{aligned} \frac{\partial {F_i}}{\partial {p_j}}(\varvec{p}) = \sum _{n=\max (i,j)}^{S-1} (h_n-h_{n+1})\, L^{\prime }\left( \sum _{m=1}^n f_m p_m \right) f_j + h_S\, L^{\prime }\left( \sum _{m=1}^S f_m p_m \right) f_j. \end{aligned}$$
Using (27) with \(K=L^{\prime }\) and \(i = \max (i_k,i_m) = i_{\max (k,m)}\), we get
$$\begin{aligned} \frac{\partial {F_{i_k}}}{\partial {p_{i_m}}}(\widetilde{\varvec{p}}) = \sum _{\ell =\max (k,m)}^{C-1} (h_{i_\ell }-h_{i_{\ell +1}})\, L^{\prime }\left( \sum _{j=1}^\ell f_{i_j} \widetilde{p}_{i_j} \right) f_{i_m} + h_{i_C}\, L^{\prime }\left( \sum _{j=1}^C f_{i_j} \widetilde{p}_{i_j} \right) f_{i_m}. \end{aligned}$$
Note that the parameters \(h_i\) and \(f_i\) with \(i \in \mathcal{S }{\setminus }\mathcal{C }\) do not appear in these partial derivatives.
We can write the matrix \(J_\mathcal{C }\) as
$$\begin{aligned} J_\mathcal{C }= 1\!\!1- G H, \end{aligned}$$
with \(G\) a diagonal matrix,
$$\begin{aligned} G = \begin{pmatrix} \widetilde{p}_{i_1} &{} 0 &{} \cdots &{} 0 \\ 0 &{} \widetilde{p}_{i_2} &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots \\ 0 &{} 0 &{} \cdots &{} \widetilde{p}_{i_C} \end{pmatrix}, \end{aligned}$$
(41)
and \(H\) a symmetric matrix,
$$\begin{aligned} H = \sum _{k=1}^{C-1} (h_{i_k}-h_{i_{k+1}}) \; \Big | L^{\prime }\left( \sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j} \right) \Big | \; \varvec{v}_k \varvec{v}_k^\top + \; h_{i_C} \, \Big | L^{\prime }\left( \sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j} \right) \Big | \; \varvec{v}_C \varvec{v}_C^\top , \end{aligned}$$
(42)
where we have used that \(L^{\prime }(x) = -|L^{\prime }(x)|\) because \(L\) is decreasing. The vectors \(\varvec{v}_k\) are
$$\begin{aligned} \varvec{v}_k = \sum _{j=1}^k f_{i_j}\;\varvec{e}_j, \end{aligned}$$
(43)
where the vectors \(\varvec{e}_j\) are the standard basis vectors in \(\mathbb{R}^C\).
First we establish that all eigenvalues of the matrix \(G H\) are real and positive. Then we show that the spectral radius of \(G H\) is at most one. As a result, all eigenvalues of \(J_\mathcal{C }= 1\!\!1- GH\) belong to the real interval \([0,1)\).
To show that all eigenvalues of the matrix \(G H\) are real and positive, we first consider the matrix \(H\),
$$\begin{aligned} H = \sum _{k=1}^C a_k\; \varvec{v}_k \varvec{v}_k^\top , \end{aligned}$$
a linear combination of symmetric rank-one matrices \(\varvec{v}_k \varvec{v}_k^\top \), with vectors \(\varvec{v}_k\) given by (43) and coefficients \(a_k\) given by (42). Because the coefficients \(a_k\) are positive, the matrix \(H\) is a sum of positive-semidefinite matrices, and therefore positive-semidefinite.
In fact, the matrix \(H\) is positive-definite. To establish this, we have to prove that \(H\) is not singular. Assume that \(H\) is singular. Then there exists a vector \(\varvec{x} \ne \varvec{0}\) for which \(H\varvec{x} = \varvec{0}\). Then also \(\varvec{x}^\top H\varvec{x} = 0\) and therefore,
$$\begin{aligned} \varvec{x}^\top H\varvec{x} = \sum _{k=1}^C a_k\,\big (\varvec{v}_k^\top \varvec{x}\big )^2 = 0. \end{aligned}$$
Because \(a_k > 0\), this implies that
$$\begin{aligned} \varvec{v}_k^\top \varvec{x} = 0 \qquad \text{ for } k=1,2,\ldots ,C. \end{aligned}$$
Because the vectors \(\varvec{v}_k\) span \(\mathbb{R}^C\) (they form a triangular system with non-zero diagonal entries \(f_{i_k}\)),
$$\begin{aligned} \varvec{y}^\top \varvec{x} = 0 \qquad \text{ for } \text{ all } \varvec{y}\in \mathbb{R}^C. \end{aligned}$$
This implies that \(\varvec{x} = \varvec{0}\), but this contradicts our assumption. We conclude that \(H\) is not singular, and thus positive-definite.
From (41) we construct the square root \(\sqrt{G}\),
$$\begin{aligned} \sqrt{G} = \begin{pmatrix} \sqrt{\widetilde{p}_{i_1}} &{} 0 &{} \cdots &{} 0 \\ 0 &{} \sqrt{\widetilde{p}_{i_2}} &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots \\ 0 &{} 0 &{} \cdots &{} \sqrt{\widetilde{p}_{i_C}} \end{pmatrix}. \end{aligned}$$
Multiplying the matrix \(G H\) on the left by \(\sqrt{G}^{-1}\) and on the right by \(\sqrt{G}\), we get
$$\begin{aligned} \sqrt{G}^{-1} GH\, \sqrt{G} = \sqrt{G} \,H \sqrt{G}. \end{aligned}$$
(44)
Because the matrix \(H\) is positive-definite, the matrix \(\sqrt{G} \,H \sqrt{G}\) is also positive-definite, and therefore has positive real eigenvalues. Equation (44) expresses that the matrices \(G H\) and \(\sqrt{G} \,H \sqrt{G}\) are similar, and therefore have the same eigenvalues. As a result, all eigenvalues of the matrix \(G H\) are positive and real.
To show that the eigenvalues of the matrix \(G H\) are at most one, we first note that the matrix \(H\) has positive components. We construct an auxiliary matrix \(H_+\) whose components are larger than those of \(H\),
$$\begin{aligned} H_+ = \sum _{k=1}^{C-1} (h_{i_k}-h_{i_{k+1}}) \; \frac{L\left( \sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j} \right) }{\sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j}} \; \varvec{v}_k \varvec{v}_k^\top + \; h_{i_C} \, \frac{L\left( \sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j} \right) }{\sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j}} \; \varvec{v}_C \varvec{v}_C^\top . \end{aligned}$$
(45)
Comparing (42) and (45) and using the inequality,
$$\begin{aligned} |L^{\prime }(x)| = - L^{\prime }(x) = \frac{1-\mathrm{e}^{-x}}{x^2} - \frac{x\,\mathrm{e}^{-x}}{x^2} \le \frac{L(x)}{x} \qquad \text{ for } x>0, \end{aligned}$$
we see that
$$\begin{aligned} H \le H_+ \qquad \text{(component-wise } \text{ inequality) }. \end{aligned}$$
Because the matrix \(G\) is diagonal with positive components, both \(G H\) and \(G H_+\) are matrices with positive components, and
$$\begin{aligned} G H \le G H_+ \qquad \text{(component-wise } \text{ inequality) }. \end{aligned}$$
Hence, the spectral radius of \(G H\) is smaller than or equal to the spectral radius of \(G H_+\).
We show that the vector \(\varvec{w}\) with positive components,
$$\begin{aligned} \varvec{w} = \sum _{m=1}^C \widetilde{p}_{i_m} \varvec{e}_m, \end{aligned}$$
is an eigenvector with eigenvalue one of the matrix \(G H_+\):
$$\begin{aligned} G H_+ \varvec{w}&= \sum _{k=1}^{C-1} (h_{i_k}-h_{i_{k+1}}) \; \frac{L\left( \sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j} \right) }{\sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j}} \; G \varvec{v}_k \varvec{v}_k^\top \varvec{w}\\&\quad + \; h_{i_C} \, \frac{L\left( \sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j} \right) }{\sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j}}\; G \varvec{v}_C \varvec{v}_C^\top \varvec{w} \\&= \sum _{k=1}^{C-1} (h_{i_k}-h_{i_{k+1}}) \; L\left( {\displaystyle \sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j}} \right) \; G \varvec{v}_k + \; h_{i_C} \, L\left( {\displaystyle \sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j}} \right) \; G \varvec{v}_C \\&= \sum _{\ell =1}^C \left( \sum _{k=\ell }^{C-1} (h_{i_k}-h_{i_{k+1}}) \; L\left( {\displaystyle \sum _{j=1}^k f_{i_j}\widetilde{p}_{i_j}} \right) + h_{i_C} \, L\left( {\displaystyle \sum _{j=1}^C f_{i_j}\widetilde{p}_{i_j}} \right) \right) f_{i_\ell } \widetilde{p}_{i_\ell } \varvec{e}_\ell \\&= \sum _{\ell =1}^C F_{i_\ell }(\widetilde{\varvec{p}})\, f_{i_\ell } \widetilde{p}_{i_\ell } \varvec{e}_\ell \\&= \sum _{\ell =1}^C \widetilde{p}_{i_\ell } \varvec{e}_\ell \\&= \varvec{w}, \end{aligned}$$
where we have used (27) with \(K=L\) and \(i=i_\ell \) in the fourth equality and (24) in the fifth equality. The matrix \(G H_+\) has positive components, so we can apply Perron-Frobenius theory. Because the vector \(\varvec{w}\) has positive components and is an eigenvector of \(G H_+\), the corresponding eigenvalue equals the spectral radius of \(G H_+\). Hence, the spectral radius of \(G H_+\) is equal to one, and the spectral radius of \(G H\) is smaller than or equal to one. We have shown before that the eigenvalues of \(G H\) are real and positive. We conclude that the eigenvalues of \(J_\mathcal{C }= 1\!\!1- G H\) belong to the real interval \([0,1)\). This finishes the proof that conditions (16) are necessary and sufficient for local stability of the equilibrium \(\widetilde{\varvec{p}}\).
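As a numerical illustration of the spectral argument, the sketch below builds \(H\) from (42)-(43) and checks, via the similar symmetric matrix of (44), that the eigenvalues of \(G H\) are real, positive and below one. The trait values are illustrative choices satisfying (33), with \(f_i = 1/g_i\).

```python
import numpy as np
from scipy.optimize import brentq

def L(y):
    return (1.0 - np.exp(-y)) / y

def dL(y):
    # derivative of L; negative for y > 0
    return (y * np.exp(-y) - (1.0 - np.exp(-y))) / y**2

# illustrative traits (ordered i_1, i_2, i_3) satisfying the chain (33); f = 1/g
g = np.array([0.55, 0.30, 0.10])
h = np.array([0.90, 0.60, 0.30])
f = 1.0 / g

r = [(g[0] - g[1]) / (h[0] - h[1]), (g[1] - g[2]) / (h[1] - h[2]), g[2] / h[2]]
x = np.array([brentq(lambda y, rk=rk: L(y) - rk, 1e-9, 100.0) for rk in r])
p = np.diff(np.concatenate(([0.0], x))) / f   # occupancies, cf. (34)-(35)

# H from (42): coefficients (h_{i_k}-h_{i_{k+1}})|L'| for k < C and h_{i_C}|L'| for k = C
a = [(h[0] - h[1]) * abs(dL(x[0])),
     (h[1] - h[2]) * abs(dL(x[1])),
     h[2] * abs(dL(x[2]))]
v = [np.where(np.arange(3) <= k, f, 0.0) for k in range(3)]   # vectors (43)
H = sum(a[k] * np.outer(v[k], v[k]) for k in range(3))

# eigenvalues of G H via the similar symmetric matrix sqrt(G) H sqrt(G), cf. (44)
sqrtG = np.diag(np.sqrt(p))
eig = np.linalg.eigvalsh(sqrtG @ H @ sqrtG)
assert eig.min() > 0.0 and eig.max() < 1.0    # hence eigenvalues of J_C lie in [0, 1)
```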
Appendix 4: Proof of Result 3
Consider a set \(\mathcal{S }\) of species with traits \((g_i,h_i)\) satisfying Assumption 1. We prove that there is one and only one subset \(\mathcal{C }\subset \mathcal{S }\) for which inequalities (12) and (16) are simultaneously satisfied.
In the main text we have given a graphical representation of the coexistence conditions (12) and the stability conditions (16). For a subset \(\mathcal{C }\subset \mathcal{S }\) we construct a broken line passing through the trait pairs \((g_i,h_i)\) of species \(i \in \mathcal{C }\). Conditions (12) impose that at each species in \(\mathcal{C }\) the slope of the broken line should decrease (note that the slope changes at a species in \(\mathcal{C }\) due to Assumption 1). Conditions (16) impose that each species in \(\mathcal{S }{\setminus } \mathcal{C }\) should lie to the right of and below the broken line (note that no species in \(\mathcal{S }{\setminus } \mathcal{C }\) lies on the broken line due to Assumption 1).
Existence
We present an explicit algorithm to construct a set \(\mathcal{C }\) satisfying conditions (12) and (16).
Species are added one by one to the set \(\mathcal{C }\) (lines 5 and 9). The species \(i^*\) to be added is determined by a maximization problem (lines 4 and 8) over a set of species \(\mathcal{A }\) (computed in lines 2, 6 and 10). Species are added in order of increasing \(g\) (and increasing \(h\)). Note that each maximization problem has a unique solution due to Assumption 1. The algorithm is illustrated in Fig. 5 for an example of \(S=5\) species.
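The construction can be sketched in code as a greedy procedure (a hypothetical reconstruction, reading the figure with \(g\) on the horizontal axis and \(h\) on the vertical axis; the function name, candidate rule and stopping rule are our interpretation of the description): starting from the origin, repeatedly add the species that maximizes the slope of the next segment, among the species lying above the slope-one line through the current endpoint of the broken line.

```python
def assemble_community(traits):
    """Greedy sketch of the broken-line construction.

    traits: dict mapping species label -> (g, h).
    Returns the constructed set C as a list, in order of increasing g.
    Hypothetical reconstruction: the candidate set A contains the species
    strictly above the slope-one line through the current endpoint, and the
    species maximizing the slope of the next segment is added (the maximizer
    is unique under Assumption 1).
    """
    g_cur, h_cur = 0.0, 0.0   # both broken lines start at the origin
    community = []
    while True:
        A = [i for i, (gi, hi) in traits.items()
             if i not in community and gi > g_cur and hi - h_cur > gi - g_cur]
        if not A:
            break
        i_star = max(A, key=lambda i: (traits[i][1] - h_cur) / (traits[i][0] - g_cur))
        community.append(i_star)
        g_cur, h_cur = traits[i_star]
    return community

# two coexisting species: slopes 0.9/0.2 = 4.5, then 0.08/0.05 = 1.6 (decreasing, above one)
assert assemble_community({1: (0.2, 0.9), 2: (0.25, 0.98)}) == [1, 2]
# species 2 and 3 lie below the broken line of species 1 and are excluded
assert assemble_community({1: (0.2, 0.9), 2: (0.4, 0.95), 3: (0.5, 0.6)}) == [1]
```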
The slope of the broken line decreases at each species \(i\) in the set \(\mathcal{C }\) constructed by the algorithm. Indeed, an increasing slope of the broken line at species \(i\) would contradict the fact that species \(i\) was obtained by maximizing the slope. Hence, conditions (12) are satisfied. Similarly, each species \(j\) not belonging to the set \(\mathcal{C }\) lies to the right of and below the broken line corresponding to the set \(\mathcal{C }\). Indeed, if species \(j\) were to lie to the left of and above the broken line, then there would be at least one maximization problem for which the line segment to species \(j\) has a slope larger than the maximal slope. From this contradiction we conclude that conditions (16) are satisfied for the set \(\mathcal{C }\) constructed by the algorithm.
Uniqueness
We show that there is only one set \(\mathcal{C }\) for which conditions (12) and (16) are satisfied.
Suppose that there are two sets \(\mathcal{C }_a\) and \(\mathcal{C }_b\) for which conditions (12) and (16) hold. Assume \(\mathcal{C }_a\) and \(\mathcal{C }_b\) have \(C_a\) and \(C_b\) elements, respectively,
$$\begin{aligned} \mathcal{C }_a = \{ i_1,i_2,\ldots ,i_{C_a} \}&\qquad&\text{ with } \quad g_{i_1} < g_{i_2} < \cdots < g_{i_{C_a}}\\ \mathcal{C }_b = \{ j_1,j_2,\ldots ,j_{C_b} \}&\qquad&\text{ with } \quad g_{j_1} < g_{j_2} < \cdots < g_{j_{C_b}}. \end{aligned}$$
We assume without loss of generality that \(C_a \le C_b\).
We consider the broken lines corresponding to \(\mathcal{C }_a\) and \(\mathcal{C }_b\). Both broken lines start at the origin \(0\). We compare the first species \(i_1\) and \(j_1\) of the broken lines. Suppose \(g_{i_1} > g_{j_1}\). Species \(i_1\) cannot lie above the broken line of \(\mathcal{C }_b\) (because (16) holds for \(\mathcal{C }_b\)). Hence, species \(i_1\) lies on or below the broken line of \(\mathcal{C }_b\). But then species \(j_1\) lies above the segment \([0,i_1]\), which is not possible (because (16) holds for \(\mathcal{C }_a\)). Hence, \(g_{i_1} \le g_{j_1}\). By symmetry we also have \(g_{i_1} \ge g_{j_1}\), so that \(g_{i_1} = g_{j_1}\). By Assumption 1 this implies that \(i_1 = j_1\). Next, we compare the second species \(i_2\) and \(j_2\) of the broken lines. A similar argument leads to \(i_2 = j_2\). By repeating the same argument, we obtain \(i_3 = j_3\), ..., \(i_{C_a} = j_{C_a}\). To the right of species \(i_{C_a} = j_{C_a}\) the broken line of \(\mathcal{C }_a\) is a half-line with slope one. Species \(j_{C_a+1}\) cannot lie above this half-line (because (16) holds for \(\mathcal{C }_a\)). Species \(j_{C_a+1}\) cannot lie below the half-line either, because then the segment \([j_{C_a},j_{C_a+1}]\) would have slope smaller than one. Assumption 1 excludes that \(j_{C_a+1}\) lies on the half-line. Hence, no such species \(j_{C_a+1}\) exists, and the broken line of \(\mathcal{C }_b\) to the right of \(i_{C_a} = j_{C_a}\) is also a half-line with slope one. We conclude that \(C_a = C_b\) and that the sets \(\mathcal{C }_a\) and \(\mathcal{C }_b\) are identical.
Finally, we note that alternative (but equivalent) constructions of the set \(\mathcal{C }\) exist. We mention a construction in terms of the convex hull. For each species \(i\), let \(D_i\) be the half-line \(h=h_i+g-g_i\), \(g\ge g_i\). Let \(D\) be the convex hull of the half-line \(h=0\), \(g\ge 0\) and the half-lines \(D_i\). The set \(\mathcal{C }\) satisfying conditions (12) and (16) is the set of species lying on the boundary of \(D\); the species not belonging to \(\mathcal{C }\) lie in the interior of \(D\). This construction can also be used to prove existence and uniqueness of the set \(\mathcal{C }\) of coexisting species.
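The convex-hull characterization can also be sketched in code. The following illustrative implementation (assuming, as above, distinct trait pairs per Assumption 1) computes the upper convex hull of the origin and the trait pairs \((g_i,h_i)\), then keeps the hull vertices reached while successive segment slopes stay above one; beyond that point the boundary of \(D\) is the slope-one half-line, and the remaining species lie in the interior.

```python
def coexistence_set_hull(species):
    """Convex-hull construction of the coexistence set C (sketch).

    species: dict mapping a label to a trait pair (g, h).
    """
    pts = sorted((g, h, i) for i, (g, h) in species.items())
    hull = [(0.0, 0.0, None)]        # upper hull, starting at the origin
    for p in pts:
        while len(hull) >= 2:
            (x1, y1, _), (x2, y2, _) = hull[-2], hull[-1]
            # pop the middle point if the slope does not strictly decrease
            if (y2 - y1) * (p[0] - x2) <= (p[1] - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    # keep hull vertices while the boundary slope stays above one
    C = []
    for (x1, y1, _), (x2, y2, j) in zip(hull, hull[1:]):
        if (y2 - y1) / (x2 - x1) > 1.0:
            C.append(j)
        else:
            break
    return C
```

On the examples used for the greedy sketch, this construction returns the same coexistence set, as the equivalence asserts.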
Appendix 5: Model with overlapping generations
We demonstrate how our analysis for the discrete-time system (6) can be extended to the continuous-time system (20). We consider the conditions for the existence of an equilibrium with a specific coexistence set, and the corresponding conditions for local stability.
Dynamical system (20) can be written as
$$\begin{aligned} \frac{\mathrm{d}p_i}{\mathrm{d}t} = m \left( F_i(\varvec{p})\, f_i p_i - p_i \right) , \end{aligned}$$
with \(m\) the mortality rate and \(F_i(\varvec{p})\) defined in (23b). The equilibrium conditions, obtained by setting the right-hand side of the latter equations equal to zero, are identical to the equilibrium conditions (24) for the discrete-time system. Hence, the conditions (12) for the existence of an equilibrium with a specific coexistence set and the explicit expressions (13) for the equilibrium occupancies also hold for the continuous-time system.
Concerning the stability conditions, note that the Jacobian matrix \(J_\text{ cont }\) (evaluated at an equilibrium) of the continuous-time system (20) is related to the Jacobian matrix \(J\) (evaluated at an equilibrium) of the discrete-time system (6) by
$$\begin{aligned} J_\text{ cont } = m \big ( J - 1\!\!1\big ). \end{aligned}$$
We have shown that conditions (16) guarantee that all eigenvalues of \(J\) belong to the real interval \([0,1)\). This implies that the eigenvalues of \(J_\text{ cont }\) belong to the real interval \([-m,0)\), so that the equilibrium of the continuous-time system is locally stable. Conversely, we have shown that if one of conditions (16) is violated, then there exists an eigenvalue \(\lambda >1\) of \(J\) (assuming genericity of parameters). This implies that there exists an eigenvalue \(m(\lambda -1)>0\) of \(J_\text{ cont }\), so that the equilibrium of the continuous-time system is unstable. This proves that conditions (16) are necessary and sufficient for local stability of an equilibrium of the continuous-time system as well.
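The spectral mapping used here can be checked numerically. In the sketch below, \(J\) is a random stand-in for the discrete-time Jacobian (not a Jacobian of the actual model); the check confirms that the spectrum of \(J_\text{cont} = m(J - 1\!\!1)\) is the image of the spectrum of \(J\) under \(\lambda \mapsto m(\lambda - 1)\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 0.3                       # dimension and mortality rate
J = rng.random((n, n))              # stand-in discrete-time Jacobian
J_cont = m * (J - np.eye(n))        # continuous-time Jacobian

lam = np.linalg.eigvals(J)
lam_cont = np.linalg.eigvals(J_cont)
# lambda -> m*(lambda - 1) preserves the sort order for m > 0,
# so sorting both spectra aligns corresponding eigenvalues
assert np.allclose(np.sort(lam_cont), np.sort(m * (lam - 1)))
```

In particular, eigenvalues of \(J\) in \([0,1)\) map into \([-m,0)\), and any eigenvalue \(\lambda > 1\) maps to \(m(\lambda-1) > 0\), as used in the stability argument above.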
Note on Muller-Landau (2010)
Muller-Landau (2010) conjectured that the following conditions are necessary and sufficient for the existence of an equilibrium in which all \(S\) species coexist:
$$\begin{aligned} h_1-h_2&> \frac{f_2-f_1}{f_1f_2} \end{aligned}$$
(46a)
$$\begin{aligned} \frac{f_ih_i-f_{i-1}h_{i-1}}{f_i-f_{i-1}}&> \frac{f_{i+1}h_{i+1}-f_ih_i}{f_{i+1}-f_i} + \frac{1}{f_i} \qquad \text{for all}\ i=2,3,\ldots ,S-1 \end{aligned}$$
(46b)
$$\begin{aligned} f_Sh_S&> f_{S-1}h_{S-1} + \frac{f_S-f_{S-1}}{f_S}. \end{aligned}$$
(46c)
We compare conditions (46) with the exact necessary and sufficient conditions, which follow from Result 1,
$$\begin{aligned} \frac{h_1-h_2}{g_1-g_2}&> 1 \end{aligned}$$
(47a)
$$\begin{aligned} \frac{h_i-h_{i+1}}{g_i-g_{i+1}}&> \frac{h_{i-1}-h_i}{g_{i-1}-g_i} \qquad \text{for all}\ i=2,3,\ldots ,S-1 \end{aligned}$$
(47b)
$$\begin{aligned} \frac{h_S}{g_S}&> \frac{h_{S-1}-h_S}{g_{S-1}-g_S}. \end{aligned}$$
(47c)
Inequality (46a) is equivalent to inequality (47a). Inequality (46b) with the last term on the right-hand side dropped is equivalent to inequality (47b). Inequality (46c) with the last term dropped is equivalent to inequality (47c). Hence, conditions (46) are too strong: they are sufficient but not necessary for the existence of the coexistence equilibrium.