Appendix: Proofs
The asymptotically unbiased estimator of the extreme value index is based on the sample moments
$$\begin{aligned} M_{k}^{(\alpha)}:=\frac{1}{k}\sum_{i=1}^{k}(\log X_{n-i+1,n}-\log X_{n-k,n})^{\alpha} \end{aligned}$$
defined in Sect. 4.1. One can write these statistics as functionals of the tail quantile process \((Q_{n}(t):=X_{n-[kt],n})_{t\in [0,1]}\) via
$$\begin{aligned} M_{k}^{(\alpha)}=\int_{0}^{1}\left(\log\frac{Q_{n}(t)}{Q_{n}(1)}\right )^{\alpha}dt\;. \end{aligned}$$
Therefore, to derive the asymptotic properties of the asymptotically unbiased estimator, we first establish those of the tail quantile process and the moments. We begin by showing that the tail quantile process can be approximated by a Gaussian process, as stated in the following result.
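For concreteness, the following minimal sketch (an illustration only, not part of the proofs) computes \(M_{k}^{(\alpha)}\) directly from the order statistics; the data are i.i.d. strict Pareto with index \(\gamma\), an assumed toy model for which all bias terms vanish and \(M_{k}^{(\alpha)}\approx\gamma^{\alpha}\Gamma(\alpha+1)\) for large \(k\) (cf. Corollary A.2 below).

```python
import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(1)

def sample_moment(x, k, alpha):
    """M_k^(alpha): average of (log X_{n-i+1,n} - log X_{n-k,n})^alpha over the top k order statistics."""
    xs = np.sort(x)
    log_excesses = np.log(xs[-k:]) - np.log(xs[-k - 1])
    return np.mean(log_excesses ** alpha)

# illustrative i.i.d. strict Pareto sample: P(X > x) = x^{-1/gam} for x >= 1
gam, n, k = 0.5, 100_000, 2_000
x = rng.pareto(1.0 / gam, size=n) + 1.0

for alpha in (1, 2):
    print(alpha, sample_moment(x, k, alpha), gam ** alpha * gamma_fn(alpha + 1))
# alpha = 1 gives the Hill estimator; the last column is the limit gamma^alpha * Gamma(alpha + 1).
```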
Proposition A.1
Suppose that \((X_{1}, X_{2}, \ldots)\) is a stationary \(\beta\)-mixing time series with continuous common marginal distribution function \(F\). Assume that \(F\) satisfies the third order condition (2.6) with parameters \(\gamma>0\), \(\rho<0\) and \(\rho^{\prime}\leq0\). Suppose that an intermediate sequence \(k\) satisfies, as \(n\to\infty\), that \(k\to\infty\), \(k/n\to0\) and \(\sqrt{k}A(n/k)B(n/k)=O(1)\). In addition, assume that the regularity conditions (a)–(c) hold. Then, for a given \(\varepsilon>0\), under a Skorohod construction, there exist two functions \(\tilde{A}\sim A\) and \(\tilde{B}=O(B)\), where \(A\) and \(B\) are the second and third order scale functions in (2.6), and a centered Gaussian process \((e(t))_{t\in[0,1]}\) with covariance function \(r\) defined as in the regularity condition (b) such that, as \(n\to\infty\),
$$\begin{aligned} \sup_{t\in(0,1]} t^{1/2+\varepsilon} & \bigg|\sqrt{k}\left(\log \frac{Q_{n}(t)}{ U(n/k)}+\gamma\log t\right)-\gamma t^{-1}e(t)\\ &\;{}-\sqrt{k}\tilde{A}(n/k)\frac{t^{-\rho}-1}{\rho}-\sqrt{k}\tilde {A}(n/k)\tilde{B}(n/k)\frac{t^{-\rho-\rho^{\prime}}-1}{\rho+\rho ^{\prime}}\bigg| \longrightarrow0\quad \textit{a.s.} \end{aligned}$$
Proof
By writing \(X_{i}=U(Y_{i})\) where each \(Y_{i}\) follows a standard Pareto distribution, we obtain that \((Y_{1},Y_{2},\dots)\) is a stationary \(\beta \)-mixing series satisfying the regularity conditions. This is a direct consequence of \(Y_{i}=1/(1-F(X_{i}))\). We write \(Q_{n}(t)=X_{n-[kt],n}=U(Y_{n-[kt],n})\) and focus first on the asymptotic properties of the process \((Y_{n-[kt],n})_{t\in[0,1]}\). By verifying the conditions in Drees [7, Theorem 2.1], we get that under a Skorohod construction, there exists a centered Gaussian process \((e(t))_{t\in[0,1]}\) with covariance function \(r\) defined in the regularity condition (b) such that for \(\varepsilon>0\), as \(n\to\infty\),
$$ \sup_{t\in(0,1]}t^{1/2+\varepsilon}\left|\sqrt{k}\left(t\frac {Y_{n-[k t],n}}{n/k}-1\right)-t^{-1}e(t)\right|\longrightarrow 0\quad \mbox{a.s.} $$
(A.1)
Next, we present an inequality on the function \(U\) based on the third order condition (2.6). Under that condition, there exist two functions \(\tilde{A}\sim A\) and \(\tilde{B}=O(B)\) such that for any \(\delta>0\), there exists some positive number \(u_{0}(\delta)\) such that for all \(u\geq u_{0}\) and \(ux\geq u_{0}\),
$$ \Bigg|\frac{\frac{\log U(ux)-\log U(u) -\gamma\log x}{\tilde {A}(u)}-\frac{x^{\rho}-1}{\rho}}{\tilde{B}(u)}-\frac{x^{\rho+\rho ^{\prime}}-1}{\rho+\rho^{\prime}}\Bigg|\leq\delta x^{\rho+\rho ^{\prime}} \max(x^{\delta},x^{-\delta})\;. $$
(A.2)
This inequality is a direct consequence of applying de Haan and Ferreira [13, Theorem B.3.10] to the function \(f(u)=\log U(u)-\gamma\log u\).
We combine the asymptotic property of \((Y_{n-[kt],n})_{t\in[0,1]}\) in (A.1) with the inequality (A.2) as follows. Taking \(u=n/k\) and \(ux=Y_{n-[kt],n}\) in (A.2), we get that given any \(0<\delta<-\rho-\rho^{\prime}\), for sufficiently large \(n>n_{0}(\delta )\), with probability 1,
$$\begin{aligned} &\bigg|\log Q_{n}(t) -\log U(n/k)-\gamma\log\left(\frac{k}{n} Y_{n-[kt],n}\right)- \tilde{A}(n/k) \frac{(\frac {k}{n}Y_{n-[kt],n})^{\rho}-1}{\rho} \\ &{}-\tilde{A}(n/k) \tilde{B}(n/k) \frac{(\frac {k}{n}Y_{n-[kt],n})^{\rho+\rho^{\prime}}-1}{\rho+\rho^{\prime}}\bigg| \\ &\leq\delta\tilde{A}(n/k) \tilde{B}(n/k) \left(\frac {k}{n}Y_{n-[kt],n}\right)^{\rho+\rho^{\prime}+\delta}. \end{aligned}$$
(A.3)
By applying (A.1), we bound the four terms in (A.3) that contain \(\frac{k}{n}Y_{n-[kt],n}\) to get
$$\begin{aligned} &t^{1/2+\varepsilon}\left\vert\sqrt{k}\left(\log\Big(\frac{k}{n}Y_{n-[kt],n}\Big)+\log t\right) -t^{-1}e(t)\right\vert \longrightarrow0\quad \mbox{a.s.},\\ &t^{1/2+\varepsilon}\bigg\vert \sqrt{k}\bigg(\frac{(\frac{k}{n}Y_{n-[kt],n})^{\rho}-1}{\rho}-\frac{t^{-\rho}-1}{\rho}\bigg)-t^{-\rho-1}e(t)\bigg\vert =o(t^{-\rho})\longrightarrow0\quad \mbox{a.s.},\\ &t^{1/2+\varepsilon}\bigg\vert \sqrt{k}\bigg(\Big(\frac{k}{n}Y_{n-[kt],n}\Big)^{\rho+\rho^{\prime}}-t^{-\rho-\rho^{\prime}}\bigg)-(\rho+\rho^{\prime})\big(t^{-\rho-\rho^{\prime}-1}e(t)\big)\bigg\vert =o\big(t^{-\rho-\rho^{\prime}}\big)\longrightarrow0 \quad \mbox{a.s.},\\ &t^{1/2+\varepsilon}\left(\frac{k}{n}Y_{n-[kt],n}\right)^{\rho+\rho^{\prime}+\delta}=O(t^{1/2-\rho-\rho^{\prime}+\varepsilon-\delta})=O(1)\quad \mbox{a.s.} \end{aligned}$$
Letting \(n\to\infty\) and using the facts that \(\sup_{t\in(0,1]}t^{1/2+\varepsilon}t^{-1}\left\vert e(t)\right\vert =O(1) \ \mbox{a.s.}\), \(\sqrt{k}\tilde{A}(n/k)\tilde{B}(n/k)=O(1)\) and \(\tilde{A}(n/k), \tilde{B}(n/k)\to0\), the proposition follows because \(\delta>0\) can be chosen arbitrarily small. □
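As a side illustration of the proposition (not part of the proof), the following sketch simulates i.i.d. strict Pareto data, for which \(U(y)=y^{\gamma}\) exactly, so that the bias terms vanish; assuming that the i.i.d. case satisfies the regularity conditions with \(r(s,t)=s\wedge t\), the proposition predicts that, for fixed \(t\), \(\sqrt{k}(\log(Q_{n}(t)/U(n/k))+\gamma\log t)\) is approximately centered normal with standard deviation \(\gamma/\sqrt{t}\). The sketch checks only this marginal prediction, not the weighted uniform statement.

```python
import numpy as np

rng = np.random.default_rng(2)
gam, n, k, t, reps = 0.5, 10_000, 400, 0.5, 1_000

z = np.empty(reps)
for i in range(reps):
    # strict Pareto sample: U(y) = y^gam exactly, hence no second or third order bias
    x = rng.pareto(1.0 / gam, size=n) + 1.0
    xs = np.sort(x)
    q_t = xs[n - int(k * t) - 1]                     # Q_n(t) = X_{n-[kt],n}
    z[i] = np.sqrt(k) * (np.log(q_t) - gam * np.log(n / k) + gam * np.log(t))

# predicted (under r(s,t) = min(s,t)): mean roughly 0, standard deviation gam / sqrt(t)
print(z.mean(), z.std(), gam / np.sqrt(t))
```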
By applying Proposition A.1, we get the asymptotic properties of the moments \(M_{k}^{(\alpha)}\) as follows.
Corollary A.2
Assume that the conditions in Proposition A.1 hold. Then under the same Skorohod construction as in Proposition A.1, as \(n\to\infty\),
$$\begin{aligned} &\sqrt{k}\big(M_{k}^{(\alpha)} -\gamma^{\alpha}\Gamma(\alpha +1)\big)-\alpha\gamma^{\alpha}P_{1}^{(\alpha)} -\sqrt{k}\tilde {A}(n/k)\gamma^{\alpha-1}\frac{\Gamma(\alpha+1)}{\rho} \bigg(\frac{1}{(1-\rho)^{\alpha}}-1\bigg)\\ &{}-\sqrt{k}\tilde{A}(n/k)\tilde{B}(n/k)\gamma^{\alpha-1}\frac {\Gamma(\alpha+1)}{\rho+\rho^{\prime}}\left(\frac{1}{(1-\rho-\rho ^{\prime})^{\alpha}}-1\right)\\ &{}-\sqrt{k}\tilde{A}(n/k)^{2}\gamma^{\alpha-2} \frac{\Gamma(\alpha +1)}{2\rho^{2}}\left(\frac{1}{(1-2\rho)^{\alpha}}-\frac{2}{(1-\rho )^{\alpha}}+1\right)\longrightarrow0\quad \textit{a.s.}, \end{aligned}$$
where the \(P_{1}^{(\alpha)}\) are normally distributed random variables with mean zero. In addition,
$$\begin{aligned} {\mathrm{Cov}}\big(P_{1}^{(\alpha)},P_{1}^{(\tilde{\alpha })}\big)=\iint_{[0,1]^{2}} &(-\log s)^{\alpha-1}(-\log t)^{\tilde{\alpha}-1} \\ &\times\bigg(\frac{r(s,t)}{st}-\frac{r(s,1)}{s}-\frac {r(1,t)}{t}+r(1,1) \bigg)ds \,dt, \end{aligned}$$
with the covariance function \(r\) defined as in the regularity condition (b).
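As a sanity check on this formula (a side remark, not used in the proofs), consider the i.i.d. setting and assume, as is typical there, that the regularity conditions hold with \(r(s,t)=s\wedge t\). The integrand in brackets then reduces to \((s\wedge t)/(st)-1\), and elementary integration gives
$$\begin{aligned} {\mathrm{Var}}\big(P_{1}^{(1)}\big)=1,\qquad{\mathrm{Cov}}\big(P_{1}^{(1)},P_{1}^{(2)}\big)=2,\qquad {\mathrm{Var}}\big(P_{1}^{(2)}\big)=5\;, \end{aligned}$$
which, combined with the expansion above, recovers the classical i.i.d. asymptotic variances \(\gamma^{2}\) for \(\sqrt{k}(M_{k}^{(1)}-\gamma)\) and \(20\gamma^{4}\) for \(\sqrt{k}(M_{k}^{(2)}-2\gamma^{2})\).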
Proof of Corollary A.2
Recall that
$$\begin{aligned} M_{k}^{(\alpha)}=\int_{0}^{1} \left(\log\frac{Q_{n}(t)}{U(n/k)}-\log \frac{Q_{n}(1)}{U(n/k)}\right)^{\alpha}dt\;. \end{aligned}$$
Under the same Skorohod construction as in Proposition A.1, we get that as \(n\to\infty\),
$$\begin{aligned} \sup_{t\in(0,1]} t^{1/2+\varepsilon} \bigg|&\sqrt{k}\bigg(\log \frac{Q_{n}(t)}{ Q_{n}(1)} -\gamma(-\log t)\bigg)-\gamma\big(t^{-1}e(t)-e(1)\big)\\ &-\sqrt{k}\tilde{A}(n/k)\frac{t^{-\rho}-1}{\rho} -\sqrt{k}\tilde{A}(n/k)\tilde{B}(n/k)\frac{t^{-\rho-\rho^{\prime}}-1}{\rho+\rho^{\prime}}\bigg| \longrightarrow0\quad \mbox{a.s.} \end{aligned}$$
The second order expansion \((1+x)^{\alpha}=1+\alpha x+\frac{\alpha (\alpha-1)}{2}x^{2}+o(x^{2})\) yields that as \(n\to\infty\),
$$\begin{aligned} \sup_{t\in(0,1]} t^{1/2+\varepsilon} \bigg|&\sqrt{k}\bigg(\Big(\log\frac{Q_{n}(t)}{ Q_{n}(1)}\Big)^{\alpha}-\gamma^{\alpha}(-\log t)^{\alpha}\bigg)\\ &-\alpha\gamma^{\alpha}(-\log t)^{\alpha-1} \big(t^{-1}e(t)-e(1)\big)\\ &-\sqrt{k}\tilde{A}(n/k)\alpha\gamma^{\alpha-1}(-\log t)^{\alpha -1}\frac{t^{-\rho}-1}{\rho}\\ &-\sqrt{k}\tilde{A}(n/k)\tilde{B}(n/k)\alpha\gamma^{\alpha -1}(-\log t)^{\alpha-1}\frac{t^{-\rho-\rho^{\prime}}-1}{\rho+\rho ^{\prime}}\\ &-\sqrt{k}\tilde{A}^{2}(n/k)\frac{\alpha(\alpha-1)}{2}\gamma ^{\alpha-2}(-\log t)^{\alpha-2}\left(\frac{t^{-\rho}-1}{\rho }\right)^{2}\bigg| \longrightarrow0\quad \mbox{a.s.} \end{aligned}$$
Some terms are omitted because we have \(\sup_{t\in (0,1]}t^{1/2+\varepsilon}t^{-1}\left\vert e(t)\right\vert=O(1)\; \mbox{a.s.}\) and \(\tilde{A}(n/k)\to0\) as \(n\to\infty\).
By taking \(\varepsilon<1/2\), we can then integrate \((\log\frac{Q_{n}(t)}{ Q_{n}(1)})^{\alpha}\) over \((0,1]\) and use the identity \(\int_{0}^{1}(-\log t)^{a-1} t^{-b}\,dt=\frac{\Gamma(a)}{(1-b)^{a}}\), valid for \(a>0\) and \(b<1\), to obtain the result in the corollary. The random term is \(P_{1}^{(\alpha)}=\int_{0}^{1}(-\log t)^{\alpha-1} \big(t^{-1}e(t)-e(1)\big)\,dt\); the covariance formula follows by computing \({\mathrm{Cov}}\big(s^{-1}e(s)-e(1),\,t^{-1}e(t)-e(1)\big)\) inside the double integral. □
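The integral identity used above can also be verified numerically; the following short sketch (with arbitrary illustrative values of \(a\) and \(b\)) compares a quadrature evaluation of the left-hand side with \(\Gamma(a)/(1-b)^{a}\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

def lhs(a, b):
    # integral of (-log t)^(a-1) * t^(-b) over (0, 1); integrable at 0 precisely when b < 1
    val, _ = quad(lambda t: (-np.log(t)) ** (a - 1) * t ** (-b), 0.0, 1.0, limit=200)
    return val

for a, b in [(1.0, 0.5), (2.0, 0.5), (2.0, -1.0), (3.0, 0.25)]:
    print(a, b, lhs(a, b), gamma_fn(a) / (1.0 - b) ** a)   # the two columns should agree
```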
Next, we handle the estimator of the second order parameter \(\rho\). The estimator of \(\rho\) is based on a different sequence \((k_{\rho})\) satisfying (2.7). Because \((k_{\rho})\) satisfies the condition in Proposition A.1, we get the asymptotic properties of the moments \(M_{k_{\rho}}^{(\alpha)}\) as in Corollary A.2. Then, following the same lines as in the proof of Gomes et al. [11, Theorem 2.2], we get the following result.
Proposition A.3
Suppose that \((X_{1}, X_{2}, \ldots)\) is a stationary \(\beta\)-mixing time series with continuous common marginal distribution function \(F\). Assume that \(F\) satisfies the third order condition (2.6) with parameters \(\gamma>0\), \(\rho<0\), \(\rho^{\prime}\leq0\). Suppose that an intermediate sequence \((k_{\rho})\) satisfies (2.7). In addition, assume that the regularity conditions hold. Then for the \(\rho\)-estimator defined in (4.1) and as \(n\to\infty\),
$$\begin{aligned} \sqrt{k_{\rho}}\tilde{A}(n/k_{\rho})\big(\hat{\rho}_{k_{\rho}}^{(\alpha)}-\rho\big) \end{aligned}$$
is asymptotically normally distributed.
We remark that analogously to the result in Theorem 2.1 in Gomes et al. [11], the consistency of the \(\rho\)-estimator for \(\beta\)-mixing time series can be proved under only the second order condition (2.3) and weaker conditions on \((k_{\rho})\).
Finally, we can use the tools built in Corollary A.2 and Proposition A.3 to prove our main results.
Proof of Theorem 4.1
From Corollary A.2, with \((k_{n})\) satisfying (2.8), under the same Skorohod construction as in Proposition A.1, the Hill estimator has the expansion
$$\begin{aligned} \sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}-\gamma\right)-\gamma P_{1}^{(1)}-\sqrt{k_{n}}\tilde{A}(n/k_{n})\frac{1}{1-\rho}\longrightarrow 0\quad \mbox{a.s.}, \end{aligned}$$
which leads to
$$\begin{aligned} \sqrt{k_{n}}\big(\hat{\gamma}_{k_{n}}^{2}-\gamma^{2}\big) - 2 \gamma ^{2}P_{1}^{(1)} - \sqrt{k_{n}}\tilde{A}(n/k_{n})\frac{2\gamma}{1-\rho }\longrightarrow0\quad \mbox{a.s.} \end{aligned}$$
Together with the asymptotic properties of \(M_{k_{n}}^{(2)}\) obtained again from Corollary A.2, this implies that
$$\begin{aligned} \sqrt{k_{n}}\big(M_{k_{n}}^{(2)}-2\hat{\gamma}_{k_{n}}^{2}\big)-2\gamma ^{2}\big(P_{1}^{(2)}-2P_{1}^{(1)}\big)-\sqrt{k_{n}}\tilde{A}(n/k_{n})\frac {2\gamma\rho}{(1-\rho)^{2}}\longrightarrow0\quad \mbox{a.s.} \end{aligned}$$
Thus, the asymptotically unbiased estimator has the expansion, almost surely as \(n\to\infty\),
$$\begin{aligned} &\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma\right ) \\ &=\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}-\gamma\right)-\frac{1}{2\hat{\gamma}_{k_{n}}\hat{\rho}_{k_{\rho}}^{(\alpha)}(1-\hat{\rho}_{k_{\rho}}^{(\alpha)})^{-1}}\sqrt{k_{n}}\big(M_{k_{n}}^{(2)}-2\hat{\gamma }_{k_{n}}^{2}\big) \\ &=\gamma P_{1}^{(1)}+\sqrt{k_{n}}\tilde{A}(n/k_{n})\frac{1}{1-\rho } \\ &\phantom{=:}-\frac{1}{2\hat{\gamma}_{k_{n}}\hat{\rho}_{k_{\rho}}^{(\alpha)}(1-\hat{\rho}_{k_{\rho}}^{(\alpha)})^{-1}} \left(2\gamma^{2}\big(P_{1}^{(2)}-2P_{1}^{(1)}\big)+\sqrt{k_{n}}\tilde {A}(n/k_{n})\frac{2\gamma\rho}{(1-\rho)^{2}}\right) \\ &=\gamma P_{1}^{(1)}-\frac{\gamma(1-\hat{\rho}_{k_{\rho}}^{(\alpha )})}{\hat{\rho}_{k_{\rho}}^{(\alpha)}} \big(P_{1}^{(2)}-2P_{1}^{(1)}\big) \\ &\phantom{=:}+\sqrt{k_{n}}\tilde{A}(n/k_{n})\frac{\rho}{(1-\rho)^{2}} \bigg(\frac{1-\rho}{\rho}-\frac{1-\hat{\rho}_{k_{\rho}}^{(\alpha )}}{\hat{\rho}_{k_{\rho}}^{(\alpha)}}\bigg). \end{aligned}$$
(A.4)
In the last step, we use the fact that \(\hat{\gamma}_{k_{n}}\to\gamma\; \mbox{a.s.}\) as \(n\to\infty\). Further, the relation \(k_{n}/k_{\rho}\to 0\) implies that \(\frac{\sqrt{k_{n}}\tilde{A}(n/k_{n})}{\sqrt{k_{\rho}}\tilde{A}(n/k_{\rho})}\to0\) as \(n\to\infty\). Thus, according to Proposition A.3 and Cramér’s delta method, we get that as \(n\to\infty\),
$$\begin{aligned} \sqrt{k_{n}}\tilde{A}(n/k_{n})\frac{\rho}{(1-\rho)^{2}}\bigg(\frac {1-\rho}{\rho}-\frac{1-\hat{\rho}_{k_{\rho}}^{(\alpha)}}{\hat{\rho}_{k_{\rho}}^{(\alpha)}}\bigg)\stackrel{\mathbb{P}}{\longrightarrow} 0. \end{aligned}$$
Together with the consistency of \(\hat{\rho}_{k_{\rho}}^{(\alpha)}\), the expansion (A.4) implies that as \(n\to\infty\),
$$\begin{aligned} \sqrt{k_{n}}\left(\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma\right )\stackrel{\mathbb{P}}{\longrightarrow}\dfrac{\gamma}{\rho}\big(P_{1}^{(1)}(2-\rho)+P_{1}^{(2)}(\rho-1)\big). \end{aligned}$$
The theorem is proved by using the covariance structure of \((P_{1}^{(1)}, P_{1}^{(2)})\) given in Corollary A.2. □
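To make the structure of (A.4) concrete, here is a minimal numerical sketch of the bias-corrected estimator in the form implied by the second line of (A.4), namely \(\hat{\gamma}_{k_{n}}-(M_{k_{n}}^{(2)}-2\hat{\gamma}_{k_{n}}^{2})(1-\hat{\rho})/(2\hat{\gamma}_{k_{n}}\hat{\rho})\). It is an illustration only: the data are i.i.d. Fréchet (for which \(\gamma=0.5\) and \(\rho=-1\)) rather than a \(\beta\)-mixing series, and \(\hat{\rho}\) is plugged in as a fixed value instead of being computed with the estimator (4.1).

```python
import numpy as np

rng = np.random.default_rng(3)

def moments(x, k):
    """Hill estimator M_k^(1) and second log-moment M_k^(2) from the top k order statistics."""
    xs = np.sort(x)
    log_excesses = np.log(xs[-k:]) - np.log(xs[-k - 1])
    return np.mean(log_excesses), np.mean(log_excesses ** 2)

def corrected_gamma(x, k, rho_hat):
    """Bias correction in the form implied by (A.4)."""
    gamma_hat, m2 = moments(x, k)
    return gamma_hat - (m2 - 2.0 * gamma_hat ** 2) * (1.0 - rho_hat) / (2.0 * gamma_hat * rho_hat)

# illustrative i.i.d. Frechet sample: F(x) = exp(-x^{-1/gam}), so gamma = gam and rho = -1
gam, n, k = 0.5, 50_000, 5_000
x = (-np.log(rng.uniform(size=n))) ** (-gam)

hill, _ = moments(x, k)
print("Hill:", hill, " corrected:", corrected_gamma(x, k, rho_hat=-1.0), " true:", gam)
# at this large k the Hill estimate carries visible bias; the corrected value should typically be closer to gam
```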
Proof of Theorem 4.2
Denote \(d_{n}:=k_{n}/(np_{n})\) and \(T_{n}=\frac{(M_{k_{n}}^{(2)}-2\hat{\gamma}_{k_{n}}^{2})(1-\hat{\rho}_{k_{\rho}}^{(\alpha)})^{2}}{2\hat{\gamma}_{k_{n}}\{ \hat{\rho}_{k_{\rho}}^{(\alpha)}\}^{2}}\). With \(P_{1}^{(\alpha)}\) defined in Corollary A.2, following the lines of the proof of Theorem 4.1, we obtain that under the same Skorohod construction as in Proposition A.1,
$$ \sqrt{k_{n}}\bigg(T_{n}-\frac{\tilde{A}(\frac{n}{k_{n}})}{\rho}\bigg)-\frac{\gamma(1-\rho)^{2}}{\rho^{2}}\big(P_{1}^{(2)}-2P_{1}^{(1)}\big)\longrightarrow0 \quad \mbox{a.s.} $$
as \(n\to\infty\), which implies that \(T_{n}\to0\) a.s. Together with \(\sqrt{k_{n}}\tilde{A}^{2}(\frac{n}{k_{n}})\to0\) as required in condition (2.8), we have the stronger result that as \(n\to \infty\),
$$ \sqrt{k_{n}}\tilde{A}\left(\frac{n}{k_{n}}\right)T_{n}\longrightarrow 0\quad \text{a.s.} $$
(A.5)
Consider the expansion
$$\begin{aligned} &\frac{\sqrt{k_{n}}}{\log d_{n}}\left(\frac{\hat{x}_{k_{n},k_{\rho},\alpha }(p_{n})}{x\left(p_{n}\right)}-1\right)\\ &=\frac{\sqrt{k_{n}}}{\log d_{n}}\bigg(\frac{X_{n-k_{n},n}d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha}}}{x\left(p_{n}\right)}-1\bigg)\left (1-T_{n}\right)-\frac{\sqrt{k_{n}}}{\log d_{n}}T_{n}\\ &=\frac{d_{n}^{\gamma}U(\frac{n}{k_{n}})}{U(\frac{1}{p_{n}})} \bigg(\frac {\sqrt{k_{n}}}{\log d_{n}}\Big(\frac{X_{n-k_{n},n}}{U(\frac {n}{k_{n}})}-1\Big)d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma} \\ &\phantom{=\frac{d_{n}^{\gamma}U(\frac{n}{k_{n}})}{U(\frac{1}{p_{n}})} \bigg(:}+\frac{\sqrt{k_{n}}}{\log d_{n}}\Big(d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma}-1\Big)\bigg)\ (1-T_{n})\\ &\phantom{=:}-\frac{\sqrt{k_{n}}}{\log d_{n}}\bigg(T_{n}-\frac{\tilde{A}(\frac{n}{k_{n}})}{\rho}\bigg)+T_{n} \frac{\sqrt{k_{n}}\tilde{A}(\frac {n}{k_{n}})}{\log d_{n}}\frac{\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma }}{U(\frac{n}{k_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}\frac{d_{n}^{\gamma}U(\frac{n}{k_{n}})}{U(\frac{1}{p_{n}})}\\ &\phantom{=:}-\frac{\sqrt{k_{n}}\tilde{A}^{2}(\frac{n}{k_{n}})}{\log d_{n}}\frac{\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma}}{U(\frac {n}{k_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}\frac{\frac{d_{n}^{\gamma}U(\frac {n}{k_{n}})}{U(\frac{1}{p_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}\\ &\phantom{=:}-\frac{\sqrt{k_{n}}\tilde{A}(\frac{n}{k_{n}})(\tilde{A}(\frac{n}{k_{n}})+\tilde{B}(\frac{n}{k_{n}}))}{\log d_{n}}\frac{\frac {\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma}}{U(\frac{n}{k_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}+\frac{1}{\rho}}{\tilde{A}(\frac{n}{k_{n}})+\tilde{B}(\frac{n}{k_{n}})}\\ &=: \,I_{1}-I_{2}+I_{3}-I_{4}-I_{5}. \end{aligned}$$
The third order condition in (2.6) implies that as \(n\to\infty\),
$$ \Bigg\vert \frac{\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma}}{U(\frac {n}{k_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}+\frac{1}{\rho}\Bigg\vert =O\bigg(\tilde{A}\Big(\frac{n}{k_{n}}\Big)+\tilde{B}\Big(\frac {n}{k_{n}}\Big)\bigg). $$
(A.6)
The limit relation in (A.6) further implies that as \(n\to\infty\),
$$\begin{aligned} \frac{\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma}}{U(\frac {n}{k_{n}})}-1}{\tilde{A}(\frac{n}{k_{n}})}\longrightarrow-\frac{1}{\rho }\qquad\ \mbox{and}\ \qquad\frac{U(\frac{1}{p_{n}})d_{n}^{-\gamma }}{U(\frac{n}{k_{n}})}\longrightarrow1. \end{aligned}$$
Combining (A.6) with condition (2.8), we get that \(I_{4}\to0\) and \(I_{5}\to0\) as \(n\to\infty\). Next, from (A.5), we get that \(I_{2}\to0\) and \(I_{3}\to0\;\mbox{a.s.}\), as \(n\to\infty\).
Lastly, we deal with the term \(I_{1}\). Denote by \(\Gamma\) the limit of \(\sqrt{k_{n}}(\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma)\). Then we have that as \(n\to\infty\),
$$\begin{aligned} \frac{\sqrt{k_{n}}}{\log d_{n}}\Big(d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha}-\gamma}-1\Big)\longrightarrow\Gamma\quad \mbox{a.s.}, \end{aligned}$$
which yields \(\frac{1}{\log d_{n}}d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha }-\gamma}\to0\;\mbox{a.s.}\) Together with the facts that \(T_{n}\to0\; \mbox{a.s.}\) and
$$\begin{aligned} \sqrt{k_{n}}\bigg(\frac{X_{n-k_{n},n}}{U(\frac{n}{k_{n}})}-1\bigg)=O(1)\quad \mbox{a.s.} \end{aligned}$$
as \(n\to\infty\), we get that \(I_{1}\to\Gamma\;\mbox{a.s.}\) as \(n\to \infty\). The theorem is proved by combining the limit properties of the five terms in the expansion. □
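Finally, as can be read off from the first equality in the expansion above, the quantile estimator satisfies \(\hat{x}_{k_{n},k_{\rho},\alpha}(p_{n})=X_{n-k_{n},n}\,d_{n}^{\hat{\gamma}_{k_{n},k_{\rho},\alpha}}(1-T_{n})\). The sketch below evaluates this form under the same illustrative assumptions as before (i.i.d. Fréchet data with \(\gamma=0.5\), \(\rho=-1\), and a plugged-in value of \(\hat{\rho}\)); it is meant only to illustrate the algebraic form, not to reproduce the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def top_log_moments(x, k):
    """Return X_{n-k,n}, the Hill estimate M_k^(1) and the second log-moment M_k^(2)."""
    xs = np.sort(x)
    le = np.log(xs[-k:]) - np.log(xs[-k - 1])
    return xs[-k - 1], np.mean(le), np.mean(le ** 2)

def quantile_estimate(x, k, p, rho_hat):
    """x_hat(p) = X_{n-k,n} * d_n^gamma_corrected * (1 - T_n), with d_n = k/(n p),
       T_n as defined in the proof of Theorem 4.2 and gamma_corrected as in (A.4)."""
    n = len(x)
    x_nk, gamma_hat, m2 = top_log_moments(x, k)
    gamma_corr = gamma_hat - (m2 - 2.0 * gamma_hat ** 2) * (1.0 - rho_hat) / (2.0 * gamma_hat * rho_hat)
    t_n = (m2 - 2.0 * gamma_hat ** 2) * (1.0 - rho_hat) ** 2 / (2.0 * gamma_hat * rho_hat ** 2)
    d_n = k / (n * p)
    return x_nk * d_n ** gamma_corr * (1.0 - t_n)

# illustrative i.i.d. Frechet data; the true p-quantile is (-log(1-p))^{-gam}
gam, n, k, p = 0.5, 50_000, 5_000, 1e-4
x = (-np.log(rng.uniform(size=n))) ** (-gam)
print(quantile_estimate(x, k, p, rho_hat=-1.0), (-np.log(1.0 - p)) ** (-gam))
```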