Appendix A: The formal proof of why \(\gamma ^{LO}\) exists and is unique
The main difficulty in (3) is the matrix inversion, but it can be handled as follows. We factorize \(\varvec{\varSigma }\) using its eigendecomposition: \(\varvec{\varSigma =P\varLambda P'}\). Basic algebra then implies
$$\begin{aligned} \varvec{Q}=(\varvec{\varSigma }+\gamma \varvec{I_N})^{-1}=\varvec{P}(\varvec{\varLambda }+\gamma \varvec{I_N})^{-1}\varvec{P'}, \end{aligned}$$
(6)
where the inverse matrix is diagonal with values \((\varLambda _{i,i}+\gamma )^{-1}\) for \(i=1,\ldots ,N\). We use the standard notation \(M_{i,j}\) for the elements of the matrix \(\varvec{M}\). Further computations lead to:
$$\begin{aligned} Q_{i,j} = \sum _{k=1}^N P_{i,k} (\varLambda _{k,k}+\gamma )^{-1} P_{j,k}. \end{aligned}$$
Therefore, summing the rows of \(\varvec{Q}\), we see that the weights in (3) are proportional to
$$\begin{aligned} \sum _{j=1}^N Q_{i,j}=\sum _{j=1}^N \sum _{k=1}^N P_{i,k} (\varLambda _{k,k}+\gamma )^{-1} P_{j,k}, \quad i=1,\ldots ,N. \end{aligned}$$
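As a concrete illustration of (6) and of the row-sum formula above, here is a minimal numpy sketch on a synthetic covariance matrix (the data and all variable names are ours, not the paper's):

```python
# Minimal check of (6) and of the claim that the weights in (3) are
# proportional to the row sums of Q; synthetic SPD covariance matrix.
import numpy as np

rng = np.random.default_rng(42)
N, gamma = 5, 0.3
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N                       # a generic SPD covariance matrix

lam, P = np.linalg.eigh(Sigma)            # Sigma = P diag(lam) P'
Q_eig = P @ np.diag(1.0 / (lam + gamma)) @ P.T
Q_dir = np.linalg.inv(Sigma + gamma * np.eye(N))
assert np.allclose(Q_eig, Q_dir)          # equation (6)

row_sums = Q_eig.sum(axis=1)              # proportional to the weights in (3)
w = row_sums / row_sums.sum()             # normalized so that 1'w = 1
print(w)
```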
Because we are interested in Long-Only portfolios, we require that
$$\begin{aligned} \sum _{j=1}^N \sum _{k=1}^N P_{i,k} (\varLambda _{k,k}+\gamma )^{-1} P_{j,k} \ge 0, \quad i=1,\ldots ,N. \end{aligned}$$
Multiplying through by the positive quantity \(\prod _{l=1}^N (\varLambda _{l,l}+\gamma )\) to clear the denominators, this is equivalent to
$$\begin{aligned} \sum _{j=1}^N \sum _{k=1}^N P_{i,k}P_{j,k} \prod _{l \ne k} (\varLambda _{l,l}+\gamma ) \ge 0, \quad i=1,\ldots ,N. \end{aligned}$$
(7)
The left-hand side of (7) is a polynomial in \(\gamma \) of degree \(N-1\). Its leading term (in \(\gamma ^{N-1}\)) has coefficient
$$\begin{aligned} \sum _{j=1}^N \sum _{k=1}^N P_{i,k}P_{j,k}=\sum _{j=1}^N(\varvec{PP'})_{i,j}=1, \quad i=1,\ldots ,N, \end{aligned}$$
because \(\varvec{P}\) is orthogonal. Since each of these polynomials has a positive leading coefficient, it is positive for all sufficiently large \(\gamma \): as \(\gamma \) increases, all of the weights progressively become positive. Some of them are positive already at \(\gamma =0\), because they correspond to the long positions of \(\varvec{w}_{MV}\). The set of values of \(\gamma \) for which all of the constraints in (7) hold is therefore nonempty, so its infimum \(\gamma ^{LO}\), as defined in (4), exists and is unique.
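The argument above also suggests a simple numerical procedure for locating \(\gamma ^{LO}\). The sketch below (ours, with synthetic inputs) bisects on the smallest row sum of \((\varvec{\varSigma }+\gamma \varvec{I_N})^{-1}\); it assumes a single sign change in \(\gamma \), which the polynomial argument does not by itself guarantee, so a grid scan is a safer fallback:

```python
# A sketch for locating gamma^LO from (4): the smallest gamma for which
# all row sums of (Sigma + gamma*I)^{-1} are nonnegative.
import numpy as np

def min_row_sum(Sigma, gamma):
    N = Sigma.shape[0]
    Q = np.linalg.inv(Sigma + gamma * np.eye(N))
    return Q.sum(axis=1).min()

def gamma_lo(Sigma, tol=1e-10):
    if min_row_sum(Sigma, 0.0) >= 0:
        return 0.0                     # the MV portfolio is already long-only
    hi = 1.0
    while min_row_sum(Sigma, hi) < 0:  # the leading-term argument above
        hi *= 2.0                      # guarantees an upper bracket exists
    lo = 0.0
    while hi - lo > tol:               # bisection, assuming a single
        mid = 0.5 * (lo + hi)          # sign change in gamma
        if min_row_sum(Sigma, mid) < 0:
            lo = mid
        else:
            hi = mid
    return hi
```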
Appendix B: Proof of Proposition 1
For notational ease, we will write \(\varvec{w}\) instead of \(\varvec{w}_\gamma \) throughout the proof.
Using the eigendecomposition \(\varvec{\varSigma =P\varLambda P'}\) and the fact that \(\varvec{P'P=PP'=I_N}\),
$$\begin{aligned} \varvec{w' \varSigma w}&=\frac{\varvec{1'(\varSigma +\gamma I_N)^{-1}\varSigma (\varSigma +\gamma I_N)^{-1} 1}}{(\varvec{1'(\varSigma +\gamma I_N)^{-1}1})^2} = \frac{\varvec{1'P(\varLambda +\gamma I_N)^{-1} \varLambda (\varLambda +\gamma I_N)^{-1} P'1}}{(\varvec{1'P(\varLambda +\gamma I_N)^{-1}P'1})^2}\\&= \frac{\displaystyle \sum \nolimits _{i=1}^N\sum _{j=1}^N\sum _{k=1}^N P_{i,k}P_{j,k} \varLambda _{k,k}( \varLambda _{k,k}+\gamma )^{-2}}{\displaystyle \left( \sum \nolimits _{i=1}^N\sum \nolimits _{j=1}^N\sum \nolimits _{k=1}^N P_{i,k}P_{j,k}(\varLambda _{k,k}+\gamma )^{-1}\right) ^2} = \frac{\displaystyle \sum \nolimits _{k=1}^N \varLambda _{k,k}(\varLambda _{k,k}+\gamma )^{-2}\left( \sum \nolimits _{l=1}^N P_{l,k}\right) ^2}{\displaystyle \left( \sum \nolimits _{k=1}^N(\varLambda _{k,k}+\gamma )^{-1} \left( \sum \nolimits _{l=1}^N P_{l,k}\right) ^2\right) ^2} \end{aligned}$$
We now introduce the following quantity, which will be ubiquitous in the remainder of the proofs:
$$\begin{aligned} L_k=\left( \sum _{l=1}^N P_{l,k}\right) ^2. \end{aligned}$$
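Before differentiating, one can sanity-check the closed form of \(\varvec{w' \varSigma w}\) above numerically. A short sketch (synthetic data, our notation) follows:

```python
# Check that w' Sigma w matches its closed form in the eigenvalues and L_k.
import numpy as np

rng = np.random.default_rng(1)
N, gamma = 6, 0.5
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N

lam, P = np.linalg.eigh(Sigma)
ones = np.ones(N)
Q = P @ np.diag(1.0 / (lam + gamma)) @ P.T
w = Q @ ones / (ones @ Q @ ones)          # the portfolio from (3)

L = P.sum(axis=0) ** 2                    # L_k = (sum_l P_{l,k})^2
num = np.sum(lam / (lam + gamma) ** 2 * L)
den = np.sum(L / (lam + gamma)) ** 2
assert np.isclose(w @ Sigma @ w, num / den)
```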
Straightforward differentiation gives
$$\begin{aligned} \frac{\partial }{\partial \gamma }\varvec{w' \varSigma w} = \frac{\displaystyle -2\sum \nolimits _{k=1}^N\frac{\varLambda _{k,k}}{(\varLambda _{k,k}+\gamma )^{3}}L_k \times \sum \nolimits _{k=1}^N(\varLambda _{k,k}+\gamma )^{-1}L_k+2 \sum \nolimits _{k=1}^N\frac{\varLambda _{k,k}}{(\varLambda _{k,k}+\gamma )^{2}}L_k \times \sum \nolimits _{k=1}^N(\varLambda _{k,k}+\gamma )^{-2}L_k }{\displaystyle \left( \sum \nolimits _{k=1}^N(\varLambda _{k,k}+\gamma )^{-1} L_k\right) ^3}. \end{aligned}$$
The denominator is always positive, therefore, the sign of \(\frac{\partial }{\partial \gamma }\varvec{w' \varSigma w}\) is equal to that of
$$\begin{aligned}&\sum _{k=1}^N \sum _{i=k+1}^NL_kL_i \left[ \frac{\varLambda _{k,k}}{( \varLambda _{k,k}+\gamma )^2} (\varLambda _{i,i}+\gamma )^{-2}-\frac{\varLambda _{k,k}}{(\varLambda _{k,k}+\gamma )^3}(\varLambda _{i,i}+\gamma )^{-1} \right. \\&\qquad \left. + \frac{\varLambda _{i,i}}{(\varLambda _{i,i}+\gamma )^2}(\varLambda _{k,k}+\gamma )^{-2} - \frac{\varLambda _{i,i}}{(\varLambda _{i,i}+\gamma )^3}(\varLambda _{k,k}+\gamma )^{-1}\right] \\&= \sum _{k=1}^N \sum _{i=k+1}^NL_kL_i (\varLambda _{i,i}+\gamma )^{-1}(\varLambda _{k,k}+\gamma )^{-1} f_\gamma (\varLambda _{i,i},\varLambda _{k,k}) \end{aligned}$$
where
$$\begin{aligned} f_{\gamma }(x,y)=(x+\gamma )^{-1}(y+\gamma )^{-1}(x+y)-x(x+\gamma )^{-2}-y(y+\gamma )^{-2}. \end{aligned}$$
Further, we have
$$\begin{aligned} \frac{\partial }{\partial x}f_{\gamma }(x,y)=\frac{2\gamma (x-y)}{(x+\gamma )^3(y+\gamma )}, \quad \frac{\partial ^2}{\partial x^2}f_{\gamma }(x,y)=\frac{2\gamma (\gamma -2x+3y)}{(x+\gamma )^4(y+\gamma )}, \end{aligned}$$
and consequently the mapping \(x \mapsto f_{\gamma }(x,y)\) is decreasing for \(x<y\) and increasing for \(x>y\), so it reaches its minimum value (which is zero) at \(x=y\). Therefore, \(\frac{\partial }{\partial \gamma }\varvec{w' \varSigma w} \ge 0\). The derivative can only vanish if all eigenvalues are equal, which is excluded because \(\varvec{\varSigma }\) is not a multiple of \(\varvec{I_N}\). The derivative of \(\varvec{w' w}\) with respect to \(\gamma \) is dealt with in the same fashion, except that \(f_{\gamma }(x,y)\) must be replaced by
$$\begin{aligned} g_{\gamma }(x,y)=2(x+\gamma )^{-1}(y+\gamma )^{-1}-(x+\gamma )^{-2}-(y+\gamma )^{-2}=\frac{-(x-y)^2}{(x+\gamma )^2(y+\gamma )^2}, \end{aligned}$$
which is always strictly negative for \(x,y,\gamma > 0\) and \(x \ne y\) (which occurs because \(\varvec{\varSigma }\ne \alpha \varvec{I_N}\)). Moreover, \(\frac{\partial }{\partial \gamma }\varvec{w' w}\) is clearly bounded for \(x,y>0\) and \(\gamma \ge 0\).
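The derivative identities for \(f_\gamma \) (including the corrected second derivative) and the closed form of \(g_\gamma \) can be verified symbolically. A sketch using sympy (a tool of our choosing, not used in the paper):

```python
# Symbolic verification of the derivatives of f_gamma and the form of g_gamma.
import sympy as sp

x, y, g = sp.symbols('x y gamma', positive=True)
f = (x + g)**-1 * (y + g)**-1 * (x + y) - x * (x + g)**-2 - y * (y + g)**-2

d1 = sp.simplify(sp.diff(f, x) - 2*g*(x - y) / ((x + g)**3 * (y + g)))
d2 = sp.simplify(sp.diff(f, x, 2) - 2*g*(g - 2*x + 3*y) / ((x + g)**4 * (y + g)))
assert d1 == 0 and d2 == 0

gxy = 2 / ((x + g)*(y + g)) - (x + g)**-2 - (y + g)**-2
assert sp.simplify(gxy + (x - y)**2 / ((x + g)**2 * (y + g)**2)) == 0
```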
Appendix C: Proof of the lemmas
We start with Lemma 1.
Proof
For the first assertion, consider \(\gamma >U\); we can then write \(\varvec{\varSigma }+\gamma \varvec{I_N}=\gamma (\varvec{\varSigma }/\gamma + \varvec{I_N})\). By the definition of \(U\), the largest eigenvalue of \(\varvec{\varSigma }/\gamma \) is strictly smaller than one, and hence the inverse is given by the Neumann series
$$\begin{aligned} (\varvec{\varSigma }+\gamma \varvec{I_N})^{-1}&=\gamma ^{-1} \sum _{k=0}^\infty (-\varvec{\varSigma }/\gamma )^k \\&=\gamma ^{-1} \sum _{k=0}^\infty \left[ (\varvec{\varSigma }/\gamma )^{2k}-(\varvec{\varSigma }/\gamma )^{2k+1} \right] \end{aligned}$$
We now write \(s_{i,j}^{(2k)}\) for the elements of \((\varvec{\varSigma }/\gamma )^{2k}\). The \(i\)th row sum of \((\varvec{\varSigma }/\gamma )^{2k+1}\) is then given by
$$\begin{aligned} \sum _{j=1}^N s_{i,j}^{(2k+1)}&=\sum _{j=1}^N \sum _{l=1}^N s_{i,l}^{(2k)} \times \varSigma _{l,j}/\gamma \\&= \sum _{l=1}^N s_{i,l}^{(2k)} \sum _{j=1}^N \varSigma _{l,j}/\gamma \\&\le \sum _{l=1}^N s_{i,l}^{(2k)}. \end{aligned}$$
In the last inequality, we have used the definition of \(U\) (as well as the symmetry of \(\varvec{\varSigma }\)) and the fact that, by the assumption of the lemma, the row sums of even powers of \(\varvec{\varSigma }\) are positive. The inequality shows that each bracketed term \((\varvec{\varSigma }/\gamma )^{2k}-(\varvec{\varSigma }/\gamma )^{2k+1}\) in the series has nonnegative row sums (and the \(k=0\) term has strictly positive ones), so that all of the row sums of \((\varvec{\varSigma }+\gamma \varvec{I_N})^{-1}\) are positive, thereby fulfilling the condition in the definition of \(\gamma ^{LO}\) in (4).
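An illustration of Lemma 1 (ours): take \(\varvec{\varSigma }\) with nonnegative entries, so that the row sums of all of its powers are nonnegative, and take \(U\) to be its largest row sum (an assumption on our part, chosen so that the inequality above goes through):

```python
# Lemma 1 on a covariance matrix with nonnegative entries: for gamma > U
# (the largest row sum), all row sums of the inverse are positive.
import numpy as np

rng = np.random.default_rng(7)
N = 5
B = rng.uniform(0.0, 1.0, size=(N, N))
Sigma = B @ B.T / N                      # SPD with nonnegative entries

U = Sigma.sum(axis=1).max()              # largest row sum of Sigma
gamma = 1.01 * U
Q = np.linalg.inv(Sigma + gamma * np.eye(N))
print(Q.sum(axis=1))                     # all positive, as Lemma 1 predicts
assert (Q.sum(axis=1) > 0).all()
```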
We now turn to Lemma 2.
Proof
We use the notations of “Appendix B” and, replacing \(\varvec{\varSigma }\) by \(\varvec{I_N}\) in the computation of the ex-ante variance, we get
$$\begin{aligned} D(\varvec{w}_\gamma )=\frac{\displaystyle \sum \nolimits _{k=1}^N (\varLambda _{k,k}+\gamma )^{-2}L_k}{\displaystyle \left( \sum \nolimits _{k=1}^N(\varLambda _{k,k}+\gamma )^{-1} L_k\right) ^2}. \end{aligned}$$
First, we have
$$\begin{aligned} \displaystyle \sum _{k=1}^N (\varLambda _{k,k}+\delta \varLambda _{1,1})^{-2}L_k \le (\delta \varLambda _{1,1})^{-2}\sum _{k=1}^N L_k, \end{aligned}$$
while at the same time
$$\begin{aligned} \displaystyle \left( \sum _{k=1}^N(\varLambda _{k,k}+\delta \varLambda _{1,1} )^{-1} L_k\right) ^2 \ge \displaystyle \left( \sum _{k=1}^N((1+\delta )\varLambda _{1,1})^{-1} L_k\right) ^2. \end{aligned}$$
Therefore,
$$\begin{aligned} D(\varvec{w}_\gamma )^{-1} \ge \frac{\delta ^2}{(\delta +1)^2}\sum _{k=1}^N L_k=\frac{\delta ^2}{(\delta +1)^2}N, \end{aligned}$$
where the equality stems from the definition of \(L_k\) and the fact that \(\varvec{P}\) is orthogonal (the sum of all elements of \(\varvec{PP'}\) is equal to \(N\)).
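The bound of Lemma 2 is easy to check numerically; a sketch with synthetic data (the choice \(\gamma =\delta \varLambda _{1,1}\) follows the computation above):

```python
# Check Lemma 2: with gamma = delta * (largest eigenvalue), the inverse of
# D(w_gamma) = w'w is at least N * delta^2 / (1 + delta)^2.
import numpy as np

rng = np.random.default_rng(3)
N, delta = 8, 0.5
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N

gamma = delta * np.linalg.eigvalsh(Sigma).max()
ones = np.ones(N)
Q = np.linalg.inv(Sigma + gamma * np.eye(N))
w = Q @ ones / (ones @ Q @ ones)

assert 1.0 / (w @ w) >= N * delta**2 / (1 + delta)**2
print(1.0 / (w @ w), N * delta**2 / (1 + delta)**2)
```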
Appendix D: Proof of Proposition 2
We start by looking at the difference between the squared \(L^2\)-norms of the two portfolios (3), with common \(\gamma \) but different \(\varvec{\varSigma }^{(i)}\). With the notations of the former proof:
$$\begin{aligned} \epsilon&=|(\varvec{w}_\gamma ^{(1)})'\varvec{w}_\gamma ^{(1)}-(\varvec{w}_\gamma ^{(2)})'\varvec{w}_\gamma ^{(2)}| \nonumber \\&=\left| \frac{a(\varLambda ^{(1)},\gamma )}{b(\varLambda ^{(1)},\gamma )^2}-\frac{a(\varLambda ^{(2)},\gamma )}{b(\varLambda ^{(2)},\gamma )^2} \right| \nonumber \\&\le \frac{a(\varLambda ^{(1)},\gamma )\left| b(\varLambda ^{(2)},\gamma )^2-b(\varLambda ^{(1)},\gamma )^2\right| +b(\varLambda ^{(1)},\gamma )^2\left| a(\varLambda ^{(1)},\gamma )-a(\varLambda ^{(2)},\gamma )\right| }{b(\varLambda ^{(1)},\gamma )^2b(\varLambda ^{(2)},\gamma )^2}, \end{aligned}$$
(8)
with
$$\begin{aligned} a(\varLambda ^{(i)},\gamma )=\sum _{k=1}^N (\varLambda ^{(i)}_{k,k}+\gamma )^{-2}L_k, \quad \quad b(\varLambda ^{(i)},\gamma )=\sum _{k=1}^N (\varLambda ^{(i)}_{k,k}+\gamma )^{-1}L_k. \end{aligned}$$
(9)
Basic computations further yield
$$\begin{aligned} |a(\varLambda ^{(1)},\gamma )-a(\varLambda ^{(2)},\gamma )|&= \left| \sum _{k=1}^N \frac{(\varLambda ^{(2)}_{k,k}+\gamma )^2-(\varLambda ^{(1)}_{k,k}+\gamma )^2}{(\varLambda ^{(1)}_{k,k}+\gamma )^2(\varLambda ^{(2)}_{k,k}+\gamma )^2} L_k \right| \nonumber \\&\le \sum _{k=1}^N \frac{|\varLambda ^{(2)}_{k,k}-\varLambda ^{(1)}_{k,k}|(\varLambda ^{(2)}_{k,k}+\varLambda ^{(1)}_{k,k}+2\gamma )}{(\varLambda ^{(1)}_{k,k}+\gamma )^2(\varLambda ^{(2)}_{k,k}+\gamma )^2} L_k \nonumber \\&\le \sum _{k=1}^N \frac{2|\varLambda ^{(2)}_{k,k}-\varLambda ^{(1)}_{k,k}|}{\left( \varLambda ^{(1)}_{k,k}+\gamma \right) \left( \varLambda ^{(2)}_{k,k}+\gamma \right) \left( \min \left( \varLambda ^{(1)}_{k,k},\varLambda ^{(2)}_{k,k}\right) +\gamma \right) } L_k \nonumber \\&\le \frac{2\eta N}{\left( \varLambda ^{(1)}_{N,N}+\gamma \right) \left( \varLambda ^{(2)}_{N,N}+\gamma \right) \left( \min \left( \varLambda ^{(1)}_{N,N},\varLambda ^{(2)}_{N,N}\right) +\gamma \right) } \nonumber \\&:= \kappa \end{aligned}$$
(10)
where the second inequality stems from the fact that \(x+y+2z \le 2(\max (x,y)+z)\) for \(x,y,z \ge 0\), together with the identity \((x+\gamma )(y+\gamma )=(\max (x,y)+\gamma )(\min (x,y)+\gamma )\). In the last inequality, we have set \(\eta = {\text {max}}_{i=1,\ldots ,N} \left| \varLambda ^{(2)}_{i,i}-\varLambda ^{(1)}_{i,i}\right| \) and used the identity \(\sum _{k=1}^N L_k=N\).
Moreover,
$$\begin{aligned}&|b(\varLambda ^{(1)},\gamma )^2-b(\varLambda ^{(2)},\gamma )^2|\nonumber \\&\quad =(b(\varLambda ^{(1)},\gamma )+b(\varLambda ^{(2)},\gamma ))|b(\varLambda ^{(1)},\gamma )-b(\varLambda ^{(2)},\gamma )| \nonumber \\&\quad = \left( \sum _{k=1}^N\frac{\varLambda ^{(1)}_{k,k}+\varLambda ^{(2)}_{k,k}+2\gamma }{\left( \varLambda ^{(1)}_{k,k}+\gamma \right) \left( \varLambda ^{(2)}_{k,k}+\gamma \right) }L_k \right) \left| \sum _{k=1}^N\frac{\varLambda ^{(2)}_{k,k}-\varLambda ^{(1)}_{k,k}}{\left( \varLambda ^{(1)}_{k,k}+\gamma \right) \left( \varLambda ^{(2)}_{k,k}+\gamma \right) }L_k \right| \nonumber \\&\quad \le \frac{2N}{\min \left( \varLambda ^{(1)}_{N,N},\varLambda ^{(2)}_{N,N}\right) +\gamma } \times \frac{\eta N}{\left( \varLambda ^{(1)}_{N,N}+\gamma \right) \left( \varLambda ^{(2)}_{N,N}+\gamma \right) } \nonumber \\&\quad = \kappa N \end{aligned}$$
(11)
Combining (8), (10) and (11), we get
$$\begin{aligned} \epsilon \le \frac{a(\varLambda ^{(1)},\gamma )\kappa N+b(\varLambda ^{(1)},\gamma )^2 \kappa }{b(\varLambda ^{(1)},\gamma )^2b(\varLambda ^{(2)},\gamma )^2}. \end{aligned}$$
Lastly, the simple bounds
$$\begin{aligned} \frac{N}{(\varLambda ^{(i)}_{1,1}+\gamma )^2} \le a(\varLambda ^{(i)},\gamma )\le \frac{N}{(\varLambda ^{(i)}_{N,N}+\gamma )^2} \end{aligned}$$
and
$$\begin{aligned} \frac{N}{\varLambda ^{(i)}_{1,1}+\gamma } \le b(\varLambda ^{(i)},\gamma )\le \frac{N}{\varLambda ^{(i)}_{N,N}+\gamma }, \end{aligned}$$
(12)
lead to
$$\begin{aligned} \epsilon \le \frac{2 \kappa }{N^2} \frac{(\varLambda ^{(1)}_{1,1}+\gamma )^2(\varLambda ^{(2)}_{1,1}+\gamma ) ^2}{(\varLambda ^{(1)}_{N,N}+\gamma )^2} :=c \eta /N, \end{aligned}$$
(13)
for some constant \(c>0\) (which is larger when the eigenvalues are more dispersed).
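The scaling in (13) can be illustrated numerically. Following the proof, the sketch below (ours) assumes the two covariance matrices share the eigenvector matrix \(\varvec{P}\) (so that the \(L_k\) are common) and differ only in their eigenvalues, by at most \(\eta \), with a common \(\gamma \):

```python
# Check that eps = |w1'w1 - w2'w2| is of order eta / N, as in (13).
import numpy as np

rng = np.random.default_rng(11)
N, gamma, eta = 50, 0.2, 1e-3
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + 0.1 * np.eye(N)        # eigenvalues away from zero
lam1, P = np.linalg.eigh(Sigma)
lam2 = lam1 + rng.uniform(-eta, eta, size=N) # |Lambda^(2)-Lambda^(1)| <= eta

ones = np.ones(N)
def weights(lam):
    Q = P @ np.diag(1.0 / (lam + gamma)) @ P.T
    return Q @ ones / (ones @ Q @ ones)

w1, w2 = weights(lam1), weights(lam2)
print(abs(w1 @ w1 - w2 @ w2), eta / N)       # both of the same small order
```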
We now consider the functions \(h_{i}(\gamma )=(\varvec{w}^{(i)}_\gamma )'\varvec{w}^{(i)}_\gamma \) for \(i=1,2\). Recall from the proof of Proposition 1 that the derivatives \(h_i'\) are strictly negative. Therefore, the inverse functions \(h^{-1}_i\) are also strictly decreasing, and they are Lipschitz continuous on the interval \((1/N+\varepsilon ,h_i(0))\) for any strictly positive \(\varepsilon \), as depicted on the right side of Fig. 2.
Since the \(\varvec{w}^{(i)}_{\gamma ^{(i)}}\) both satisfy (2), we have
$$\begin{aligned} 0&=(\varvec{w}^{(1)}_{\gamma ^{(1)}})'\varvec{w}^{(1)}_{\gamma ^{(1)}}-(\varvec{w}^{(2)}_{\gamma ^{(2)}})'\varvec{w}^{(2)}_{\gamma ^{(2)}}\\&=\left[ (\varvec{w}^{(1)}_{\gamma ^{(1)}})'\varvec{w}^{(1)}_{\gamma ^{(1)}}-(\varvec{w}^{(1)}_{\gamma ^{(2)}})'\varvec{w}^{(1)}_{\gamma ^{(2)}}\right] +\left[ (\varvec{w}^{(1)}_{\gamma ^{(2)}})'\varvec{w}^{(1)}_{\gamma ^{(2)}}-(\varvec{w}^{(2)}_{\gamma ^{(2)}})'\varvec{w}^{(2)}_{\gamma ^{(2)}}\right] \end{aligned}$$
Now, because the \(h^{-1}_i\) are Lipschitz on \((\delta ,h_i(0))\), the first bracket satisfies
$$\begin{aligned} |\gamma ^{(1)}-\gamma ^{(2)}| \le c \left| (\varvec{w}^{(1)}_{\gamma ^{(1)}})'\varvec{w}^{(1)}_{\gamma ^{(1)}}-(\varvec{w}^{(1)}_{\gamma ^{(2)}})'\varvec{w}^{(1)}_{\gamma ^{(2)}} \right| , \end{aligned}$$
while the second bracket, as we have shown in (13), is \(O(\eta /N)\). Therefore, it must also hold that
$$\begin{aligned} |\gamma ^{(1)}-\gamma ^{(2)}| \le C \eta /N, \end{aligned}$$
for some other constant \(C\), which depends on the maximum absolute value of \((h_i^{-1})'\) on the interval \([\delta ,h_i(0)]\); note that it is not a priori obvious (or true) that the \(h_i\) are convex. Overall, we therefore expect \(|\gamma ^{(1)}-\gamma ^{(2)}|\) to be smaller for larger values of \(\delta \). When the norm constraint is loose, the \(\gamma ^{(i)}\) will usually be very small, and so will their difference.
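A sketch of this comparison (ours, with synthetic data): each \(\gamma ^{(i)}\) is obtained by solving \((\varvec{w}_\gamma )'\varvec{w}_\gamma =\delta \), assuming the norm constraint (2) binds, via bisection on the strictly decreasing maps \(h_i\):

```python
# Compare gamma^(1) and gamma^(2) when the eigenvalues differ by at most eta.
import numpy as np

rng = np.random.default_rng(5)
N, eta = 50, 1e-3
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + 0.1 * np.eye(N)
lam1, P = np.linalg.eigh(Sigma)
lam2 = lam1 + rng.uniform(-eta, eta, size=N)

ones = np.ones(N)
def h(lam, gamma):                      # squared norm of the portfolio (3)
    Q = P @ np.diag(1.0 / (lam + gamma)) @ P.T
    w = Q @ ones / (ones @ Q @ ones)
    return w @ w

delta = 0.5 * (h(lam1, 0.0) + 1.0 / N)  # a feasible target in (1/N, h(0))

def solve_gamma(lam, target, hi=1e6):
    lo = 0.0
    for _ in range(200):                # bisection: h decreases in gamma
        mid = 0.5 * (lo + hi)
        if h(lam, mid) < target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

g1, g2 = solve_gamma(lam1, delta), solve_gamma(lam2, delta)
print(abs(g1 - g2), eta / N)            # |gamma^(1)-gamma^(2)| = O(eta/N)
```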
Appendix E: Proof of Theorem 1
Simplifying the notations of the preceding proof, we will henceforth write \(b_i= b(\varLambda ^{(i)},\gamma ^{(i)})\) (defined in (9)). Then
$$\begin{aligned} \epsilon&= (\varvec{w}^{(1)}-\varvec{w}^{(2)})'(\varvec{w}^{(1)}-\varvec{w}^{(2)}) \nonumber \\&=(\varvec{w}^{(1)})'\varvec{w}^{(1)}+(\varvec{w}^{(2)})'\varvec{w}^{(2)}-2(\varvec{w}^{(1)})'\varvec{w}^{(2)} \nonumber \\&= \sum _{k=1}^N X_k \times L_k \end{aligned}$$
(14)
where
$$\begin{aligned} X_k =&\frac{1}{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) ^2b_1^2} + \frac{1}{\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) ^2b_2^2}-\frac{2}{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) \left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) b_1b_2} \nonumber \\ =&\frac{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) ^2b_1^2+\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) ^2b_2^2-2\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) \left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) b_1b_2}{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) ^2\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) ^2b_1^2b_2^2} \nonumber \\ =&\frac{\left( \left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) b_1-\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) b_2\right) ^2}{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) ^2\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) ^2b_1^2b_2^2} \nonumber \\ =&\frac{\left( b_1(\varLambda _{k,k}^{(1)}-\varLambda _{k,k}^{(2)})+b_1(\gamma ^{(1)}-\gamma ^{(2)})+ \varLambda _{k,k}^{(2)}(b_1-b_2)+\gamma ^{(2)}(b_1-b_2)\right) ^2 }{\left( \varLambda _{k,k}^{(1)}+\gamma ^{(1)}\right) ^2\left( \varLambda _{k,k}^{(2)}+\gamma ^{(2)}\right) ^2b_1^2b_2^2} \end{aligned}$$
(15)
The denominator of \(X_k\) is bounded from below by
$$\begin{aligned} d=N^4\frac{\left( \varLambda _{N,N}^{(1)}+\gamma ^{(1)}\right) ^2\left( \varLambda _{N,N}^{(2)}+\gamma ^{(2)}\right) ^2}{\left( \varLambda _{1,1}^{(1)}+\gamma ^{(1)}\right) ^2\left( \varLambda _{1,1}^{(2)}+\gamma ^{(2)}\right) ^2}, \end{aligned}$$
and, given Lemma 2, this lower bound can be replaced by one that does not depend on the \(\gamma ^{(i)}\).
Moreover, recalling that \(\eta = {\text {max}}_{i=1,\ldots ,N} \left| \varLambda ^{(2)}_{i,i}-\varLambda ^{(1)}_{i,i}\right| \) and that the two portfolios use different shrinkage intensities \(\gamma ^{(i)}\), the following bound also holds
$$\begin{aligned} |b_1-b_2|\le \sum _{k=1}^N\frac{|\varLambda ^{(2)}_{k,k}-\varLambda ^{(1)}_{k,k}|+|\gamma ^{(2)}-\gamma ^{(1)}|}{\left( \varLambda ^{(1)}_{k,k}+\gamma ^{(1)}\right) \left( \varLambda ^{(2)}_{k,k}+\gamma ^{(2)} \right) }L_k\le \frac{\left( \eta +|\gamma ^{(1)}-\gamma ^{(2)}|\right) N}{\left( \varLambda ^{(1)}_{N,N}+\gamma ^{(1)}\right) \left( \varLambda ^{(2)}_{N,N}+\gamma ^{(2)} \right) }. \end{aligned}$$
(16)
Since \(|\gamma ^{(1)}-\gamma ^{(2)}|=O(\eta /N)\) by Proposition 2, the bound in (16) is \(O(\eta N)\). We now reconnect the pieces. There are four terms in the squared numerator in (15):

- the first one is bounded by a multiple of \(\eta N\);
- the second one, by Proposition 2 and (12), is \(O(\eta )\);
- the third one, by (16), is \(O(\eta N)\), like the first one;
- lastly, by (16) and Lemma 2, the fourth one is also \(O(\eta N)\).
Consequently, going back to (14) and plugging in \(\sum _{k=1}^N L_k=N\), we get
$$\begin{aligned} \epsilon = (\varvec{w}^{(1)}-\varvec{w}^{(2)})'(\varvec{w}^{(1)}-\varvec{w}^{(2)})=O(\eta ^2/N). \end{aligned}$$
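A final numerical sketch (ours, same synthetic setup as before: common eigenvectors, eigenvalues perturbed by at most \(\eta \), each \(\gamma ^{(i)}\) calibrated so that the norm constraint binds) illustrates the \(O(\eta ^2/N)\) rate:

```python
# Check Theorem 1: (w1 - w2)'(w1 - w2) is of order eta^2 / N.
import numpy as np

rng = np.random.default_rng(13)
N, eta = 100, 1e-3
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + 0.1 * np.eye(N)
lam1, P = np.linalg.eigh(Sigma)
lam2 = lam1 + rng.uniform(-eta, eta, size=N)

ones = np.ones(N)
def weights(lam, gamma):
    Q = P @ np.diag(1.0 / (lam + gamma)) @ P.T
    return Q @ ones / (ones @ Q @ ones)

def solve_gamma(lam, target, hi=1e6):
    lo = 0.0
    for _ in range(200):                 # bisection on the decreasing norm map
        mid = 0.5 * (lo + hi)
        w = weights(lam, mid)
        if w @ w < target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w0 = weights(lam1, 0.0)
delta = 0.5 * (w0 @ w0 + 1.0 / N)        # a feasible norm target
w1 = weights(lam1, solve_gamma(lam1, delta))
w2 = weights(lam2, solve_gamma(lam2, delta))
print((w1 - w2) @ (w1 - w2), eta**2 / N) # both of the same small order
```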
Remark Instead of looking at the parameters \(\eta \) and \(N\), one may wonder about the impact of \(\delta \) on \(\epsilon \). It can essentially be measured through the magnitude of the \(\gamma ^{(i)}\). First, by the closing remark of the proof of Proposition 2, we expect \(|\gamma ^{(1)}-\gamma ^{(2)}|\) to be a decreasing function of \(\delta \). But this difference appears in the only term of (15) which does not increase with \(N\). Moreover, the terms in \(b_i\) and \(|b_1-b_2|\) are all bounded above and below by functions which decrease with the \(\gamma ^{(i)}\) and hence increase with \(\delta \) (by Proposition 1). In the end, it is very likely that \((\varvec{w}^{(1)}-\varvec{w}^{(2)})'(\varvec{w}^{(1)}-\varvec{w}^{(2)})\) will most of the time be an increasing function of \(\delta \). This seems logical: as \(\delta \) decreases to \(1/N\), both \(\varvec{w}^{(1)}\) and \(\varvec{w}^{(2)}\) converge to the equally weighted portfolio. When the constraint is tight, there is not much room for a large difference between \(\varvec{w}^{(1)}\) and \(\varvec{w}^{(2)}\), even if \(\varvec{\varSigma }^{(1)}\) and \(\varvec{\varSigma }^{(2)}\) are very different.