1 Introduction

Let \(\{X_{n},n\geq1\}\) be a sequence of independent and identically distributed random variables with common distribution function (df) \(F(x)\). Suppose that there exist constants \(a_{n}>0 \), \(b_{n}\in\mathbb{R}\) and a non-degenerate distribution \(G(x)\) such that

$$ \lim_{n\rightarrow\infty}P(M_{n}\leq a_{n} x+b_{n})=\lim_{n\rightarrow \infty}F^{n}(a_{n} x+b_{n})=G(x) $$
(1.1)

for all \(x\in C(G)\), the set of all continuity points of G, where \(M_{n}=\max_{1\leq i\leq n}X_{i}\) denotes the maximum of the first n random variables. Then \(G(x)\) must belong to one of the following three classes:

$$\begin{aligned}& \Phi_{\alpha}(x)= \textstyle\begin{cases} 0, &\mbox{if } x< 0, \\ \exp\{-x^{-\alpha}\}, &\mbox{if } x\geq0, \end{cases}\displaystyle \\& \Psi_{\alpha}(x)= \textstyle\begin{cases} \exp\{-(-x)^{\alpha}\}, &\mbox{if } x< 0, \\ 1, &\mbox{if } x\geq0, \end{cases}\displaystyle \\& \Lambda(x)=\exp\bigl\{ -e^{-x}\bigr\} ,\quad x\in\mathbb{R}, \end{aligned}$$

where α is a positive parameter. We say that F is in the max domain of attraction of G under linear normalization if (1.1) holds, denoted by \(F\in D_{l}(G)\). Criteria for \(F\in D_{l}(G)\) and the choice of the normalizing constants \(a_{n}\) and \(b_{n}\) can be found in Galambos [1], Leadbetter et al. [2], Resnick [3], and De Haan and Ferreira [4].

The limit distributions of maxima under power normalization were first derived by Pancheva [5]. A df F is said to belong to the max domain of attraction of a non-degenerate df H under power normalization, written as \(F\in D_{p}(H)\), if there exist constants \(\alpha_{n}>0\) and \(\beta_{n}>0\) such that

$$ \lim_{n\rightarrow \infty}P\biggl(\biggl\vert \frac{M_{n}}{\alpha_{n}} \biggr\vert ^{\frac{1}{\beta _{n}}}\operatorname{sign}(M_{n})\leq x\biggr)=\lim _{n\rightarrow \infty}F^{n}\bigl(\alpha_{n}|x|^{\beta_{n}} \operatorname{sign}(x)\bigr)=H(x), $$
(1.2)

where \(\operatorname{sign}(x)=-1,0\mbox{ or }1\) according to \(x<0\), \(x=0\) or \(x>0\). Pancheva [5] showed that H must be of power type, that is, one of the following six df’s:

$$\begin{aligned}& H_{1,\alpha}(x)= \textstyle\begin{cases} 0, &\mbox{if } x\leq1, \\ \exp\{-(\log x)^{-\alpha}\}, &\mbox{if } x>1, \end{cases}\displaystyle \\& H_{2,\alpha}(x)= \textstyle\begin{cases} 0, &\mbox{if } x\leq0, \\ \exp\{-(-\log x)^{\alpha}\}, &\mbox{if } 0< x< 1, \\ 1, &\mbox{if } x\geq1, \end{cases}\displaystyle \\& H_{3,\alpha}(x)= \textstyle\begin{cases} 0, &\mbox{if } x\leq-1, \\ \exp\{-(-\log(-x))^{-\alpha}\}, &\mbox{if } {-}1< x< 0, \\ 1, &\mbox{if } x\geq0, \end{cases}\displaystyle \\& H_{4,\alpha}(x)= \textstyle\begin{cases} \exp\{-(\log(-x))^{\alpha}\}, &\mbox{if } x< -1, \\ 1, &\mbox{if } x\geq-1, \end{cases}\displaystyle \\& H_{5,\alpha}(x)=\Phi_{1}(x)= \textstyle\begin{cases} 0, &\mbox{if } x\leq0, \\ \exp\{-x^{-1}\}, &\mbox{if } x>0, \end{cases}\displaystyle \\& H_{6,\alpha}(x)=\Psi_{1}(x)= \textstyle\begin{cases} \exp\{x\}, &\mbox{if } x< 0, \\ 1, &\mbox{if } x\geq0, \end{cases}\displaystyle \end{aligned}$$

where α is a positive parameter. Necessary and sufficient conditions for F to satisfy (1.2) have been given by Christoph and Falk [6], Mohan and Ravi [7], Mohan and Subramanya [8] and Subramanya [9].
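The six limit laws above are simple enough to check directly. The sketch below (purely illustrative and not part of the original text; α is fixed to 1 in the parametric families, and all function names are ours) encodes each law and verifies on a grid that it is a genuine df, i.e., nondecreasing with values in \([0,1]\):

```python
import math

# The six p-max stable laws, with alpha = 1 for concreteness (illustrative only).
def H1(x, a=1.0):
    return 0.0 if x <= 1.0 else math.exp(-math.log(x) ** (-a))

def H2(x, a=1.0):
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return math.exp(-(-math.log(x)) ** a)

def H3(x, a=1.0):
    if x <= -1.0:
        return 0.0
    if x >= 0.0:
        return 1.0
    return math.exp(-(-math.log(-x)) ** (-a))

def H4(x, a=1.0):
    return math.exp(-math.log(-x) ** a) if x < -1.0 else 1.0

def H5(x):
    return 0.0 if x <= 0.0 else math.exp(-1.0 / x)

def H6(x):
    return math.exp(x) if x < 0.0 else 1.0

GRID = [-3.0 + 0.01 * k for k in range(601)]

def is_df_on_grid(H):
    """Nondecreasing with values in [0, 1] along GRID."""
    vals = [H(x) for x in GRID]
    monotone = all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
    in_range = all(0.0 <= v <= 1.0 for v in vals)
    return monotone and in_range
```

Note that \(H_{5}\) and \(H_{6}\) are free of α, which is why they coincide with \(\Phi_{1}\) and \(\Psi_{1}\).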

The logarithmic normal distribution (lognormal distribution for short) is one of the most widely applied distributions in statistics, biology, and other disciplines. In this paper, we study the rate of convergence in (1.2) when \(X_{n}\) follows the lognormal distribution, whose probability density function is given by

$$ F'(x)=\frac{x^{-1}}{\sqrt{2\pi}}\exp \biggl\{ -\frac{(\log x)^{2}}{2} \biggr\} , \quad x>0. $$
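Since the standard lognormal df satisfies \(F(x)=\Phi(\log x)\), with Φ the standard normal df, both F and the density above are available from the standard library alone. A minimal sketch (illustrative; the function names are ours):

```python
import math

def lognorm_cdf(x):
    """F(x) = Phi(log x) for the standard lognormal, x > 0,
    with Phi(z) = erfc(-z / sqrt(2)) / 2."""
    return 0.5 * math.erfc(-math.log(x) / math.sqrt(2.0))

def lognorm_pdf(x):
    """F'(x) = x^{-1} (2 pi)^{-1/2} exp(-(log x)^2 / 2), x > 0."""
    return math.exp(-0.5 * math.log(x) ** 2) / (x * math.sqrt(2.0 * math.pi))

# A central difference of F recovers F', confirming the displayed density.
h = 1e-6
approx = (lognorm_cdf(2.0 + h) - lognorm_cdf(2.0 - h)) / (2.0 * h)
```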

One interesting problem in extreme value analysis is to estimate the rate of uniform convergence of \(F^{n}(\cdot)\) to its extreme value distribution. Under power normalization, Chen et al. [10] derived the convergence rates of the distribution of maxima for random variables obeying the general error distribution. For convergence rates of distributions of extremes under linear normalization, see De Haan and Resnick [11] under second-order regular variation; for special cases, see Hall [12] and Nair [13] for the normal distribution, results which were extended to the general error distribution and the logarithmic general error distribution in the recent work of Peng et al. [14] and Liao and Peng [15]. For other related work on the convergence rates of given distributions, see Castro [16] for the gamma distribution, Lin et al. [17] for the short-tailed symmetric distribution due to Tiku and Vaughan [18], and Liao et al. [19] for the skew normal distribution, which extended the results of Nair [13]. The aim of this paper is to study the uniform and point-wise convergence rates of the distribution of power normalized maxima to its limit.

The rest of this article is organized as follows: some auxiliary results are given in Section 2. In Section 3, we provide our main results, with the related proofs deferred to Section 4.

2 Preliminaries

To prove our results, we first cite some results from Liao and Peng [15] and Mohan and Ravi [7].

In the sequel, let \(\{X_{n},n\geq1\}\) be a sequence of independent and identically distributed random variables with common df F following the lognormal distribution. As before, let \(M_{n}=\max_{1\leq i\leq n}X_{i}\) denote the partial maximum of \(\{X_{n},n\geq1\}\). Liao and Peng [15] defined

$$ a_{n}=\frac{\exp ((2\log n)^{1/2} )}{(2\log n)^{1/2}}, \qquad b_{n}=\bigl( \exp\bigl((2\log n)^{1/2}\bigr)\bigr) \biggl(1-\frac{\log4\pi+\log\log n}{2(2\log n)^{1/2}} \biggr), $$
(2.1)

and they obtained

$$ \lim_{n\rightarrow\infty}P\bigl((M_{n}-b_{n})/a_{n} \leq x\bigr)=\exp\bigl(-e^{-x}\bigr)=:\Lambda(x). $$
(2.2)

From (2.2) we immediately derive \(F\in D_{l}(\Lambda)\). The following Mills ratio of the lognormal distribution is due to Liao and Peng [15]:

$$ \frac{1-F(x)}{F'(x)}\sim\frac{x}{\log x}, $$
(2.3)

as \(x\rightarrow\infty\), where \(F'(x)\) is the density function of the lognormal distribution \(F(x)\). According to Liao and Peng [15], we have

$$ 1-F(x)=c(x)\exp{ \biggl(-\int^{x}_{e}\frac{g(t)}{f(t)}\, \mathrm {d}t\biggr)} $$

for sufficiently large x, where \(c(x)\rightarrow(2\pi e)^{-1/2}\) as \(x\rightarrow\infty\), \(g(x)=1+(\log x)^{-2}\) and

$$ f(x)=\frac{x}{\log x}. $$
(2.4)

Note that \(f'(x)\rightarrow0\) and \(g(x)\rightarrow1\) as \(x\rightarrow\infty\).
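The Mills-type ratio (2.3) can be probed numerically with the standard library alone, using \(1-F(x)=1-\Phi(\log x)\), where Φ is the standard normal df. The sketch below (an illustration, not part of the proofs; the names are ours) measures the relative deviation of \((1-F(x))/F'(x)\) from \(x/\log x\):

```python
import math

def lognorm_sf(x):
    # 1 - F(x) = 1 - Phi(log x) = erfc(log(x) / sqrt(2)) / 2
    return 0.5 * math.erfc(math.log(x) / math.sqrt(2.0))

def lognorm_pdf(x):
    return math.exp(-0.5 * math.log(x) ** 2) / (x * math.sqrt(2.0 * math.pi))

def mills_ratio(x):
    return lognorm_sf(x) / lognorm_pdf(x)

def rel_err(log_x):
    # Relative deviation of the Mills ratio from its asymptote x / log x.
    x = math.exp(log_x)
    return abs(mills_ratio(x) / (x / log_x) - 1.0)
```

At \(\log x=10\) the relative deviation is already below one percent, and it keeps decaying like \((\log x)^{-2}\).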

Lemma 2.1

[15]

Let F denote the lognormal distribution function. Then

$$\begin{aligned} 1-F(x)&=\frac{1}{\sqrt{2\pi}}(\log x)^{-1}\exp{\biggl(- \frac {(\log x)^{2}}{2}\biggr)}-\gamma(x) \end{aligned}$$
(2.5)
$$\begin{aligned} &=\frac{1}{\sqrt{2\pi}}(\log x)^{-1}\exp{\biggl(- \frac {(\log x)^{2}}{2}\biggr)} \bigl(1-(\log x)^{-2} \bigr)+\mathcal{S}(x) \end{aligned}$$
(2.6)

for \(x>1\), where

$$ 0< \gamma(x)< \frac{1}{\sqrt{2\pi}}(\log x)^{-3}\exp{\biggl(- \frac{(\log x)^{2}}{2}\biggr)} $$
(2.7)

and

$$ 0< \mathcal{S}(x)< \frac{3}{\sqrt{2\pi}}(\log x)^{-5}\exp{ \biggl(-\frac{(\log x)^{2}}{2}\biggr)}. $$
(2.8)

In order to obtain the main results, we need the following two lemmas.

Lemma 2.2

[7]

Let F denote a df and let \(r(F)=\sup\{x:F(x)<1\}\). Suppose that \(F\in D_{l}(\Lambda)\) and \(r(F)=\infty\). Then \(F\in D_{p}(\Phi_{1})\), where the normalizing constants may be chosen as \(\alpha_{n}=b_{n}\) and \(\beta_{n}=a_{n}/b_{n}\).

Lemma 2.3

[7]

Let F denote a df. Then \(F\in D_{p}(\Phi_{1})\) if and only if

  1. (i)

    \(r(F)>0\), and

  2. (ii)

    \(\lim_{t\uparrow r(F)}\frac{1-F( t\exp(y\bar{f}(t)))}{1-F(t)}=e^{-y}\), for some positive-valued function \(\bar{f}\).

If (ii) holds for some \(\bar{f}\), then \(\int^{r(F)}_{a}((1-F(x))/x) \, \mathrm{d}x<\infty\) for \(0< a< r(F)\) and (ii) holds with the choice \(\bar{f}(t)=\int^{r(F)}_{t}((1-F(x))/x) \, \mathrm{d}x/(1-F(t))\). The normalizing constants may be chosen as \(\alpha_{n}=F^{\leftarrow}(1-1/n)\) and \(\beta_{n}=\bar{f}(\alpha_{n})\), where \(F^{\leftarrow}(x)=\inf\{y: F(y)\geq x\}\).
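For the lognormal df, condition (ii) with the above choice of \(\bar{f}\) can be checked numerically. Writing \(z=\log t\), the substitution \(x=e^{s}\) reduces the integral to \(\int_{z}^{\infty}(1-\Phi(s))\,\mathrm{d}s=\varphi(z)-z(1-\Phi(z))\), where φ and Φ are the standard normal density and df. The sketch below (illustrative only; the names are ours) uses this closed form and verifies that the ratio in (ii) approaches \(e^{-y}\):

```python
import math

def norm_sf(z):
    # 1 - Phi(z) for the standard normal df
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def f_bar(t):
    # f_bar(t) = int_t^infty ((1 - F(x)) / x) dx / (1 - F(t)); the integral
    # equals phi(z) - z * (1 - Phi(z)) with z = log t, by x = e^s.
    z = math.log(t)
    return (norm_pdf(z) - z * norm_sf(z)) / norm_sf(z)

def ratio(t, y):
    # (1 - F(t * exp(y * f_bar(t)))) / (1 - F(t)), which should tend to e^{-y}
    z = math.log(t)
    return norm_sf(z + y * f_bar(t)) / norm_sf(z)
```

For the lognormal distribution \(\bar{f}(t)\approx1/\log t\), consistent with \(\beta_{n}=1/\log\alpha_{n}\) in (2.10) below.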

Theorem 2.1

Let \(\{X_{n},n\geq1\}\) be a sequence of independent identically distributed lognormal random variables. Then \(F\in D_{p}(\Phi_{1})\) and the normalizing constants can be chosen as \(\alpha^{*}_{n}=b_{n}\), \(\beta^{*}_{n}=a_{n}/b_{n}\), where \(a_{n}\) and \(b_{n}\) are given by (2.1).

Proof

Since (2.2) shows that \(F\in D_{l}(\Lambda)\) and the lognormal distribution satisfies \(r(F)=\infty\), Lemma 2.2 implies \(F\in D_{p}(\Phi_{1})\) with \(\alpha^{*}_{n}=b_{n}\) and \(\beta^{*}_{n}=a_{n}/b_{n}\), where \(a_{n}\) and \(b_{n}\) are defined by (2.1). □

By Lemma 2.3 and (2.3), combined with Proposition 1.1(a) in [3], a natural way to choose the constants \(\alpha_{n}\) and \(\beta_{n}\) is to solve the following equations:

$$ 2\pi(\log\alpha_{n})^{2}\exp\bigl((\log \alpha_{n})^{2}\bigr)=n^{2} $$
(2.9)

and

$$ \beta_{n}=\frac{f(\alpha_{n})}{\alpha_{n}}=\frac{1}{\log\alpha_{n}}, $$
(2.10)

where f is given by (2.4). The solution of (2.9) may be expressed as

$$ \alpha_{n}=\bigl(\exp\bigl((2\log n)^{1/2} \bigr)\bigr) \biggl(1-\frac{\log4\pi+\log\log n}{2(2\log n)^{1/2}}+o\biggl(\frac{1}{(\log n)^{1/2}}\biggr) \biggr) $$
(2.11)

and it is easy to check that \(\beta_{n}\sim(2\log n)^{-1/2}\).
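Equation (2.9) has no closed-form solution, but since \(u\mapsto2\pi u^{2}e^{u^{2}}\) is strictly increasing for \(u>0\), \(u=\log\alpha_{n}\) can be found by bisection. An illustrative solver (ours, not from the paper):

```python
import math

def log_alpha(n):
    """Solve 2*pi*u^2*exp(u^2) = n^2 for u = log(alpha_n) by bisection,
    i.e. the increasing equation u^2 + 2*log(u) = 2*log(n) - log(2*pi)."""
    target = 2.0 * math.log(n) - math.log(2.0 * math.pi)
    lo, hi = 1e-6, math.sqrt(2.0 * math.log(n)) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * mid + 2.0 * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 10 ** 6
u = log_alpha(n)          # log(alpha_n)
beta = 1.0 / u            # beta_n = 1 / log(alpha_n), by (2.10)
```

For \(n=10^{6}\) the solution satisfies \(0.4(2\log n)^{1/2}<\log\alpha_{n}<(2\log n)^{1/2}\), the bound used in Section 4, while \(\beta_{n}\) differs from \((2\log n)^{-1/2}\) by about ten percent, reflecting the slow asymptotics in (2.11).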

3 Main results

In this section, we give two main results. Theorem 3.1 shows that the rate of uniform convergence of \(F^{n}(\alpha_{n}x^{\beta_{n}})\) to its extreme value limit is proportional to \(1/\log n\). Theorem 3.2 establishes that the point-wise rate of convergence of \(|M_{n}/\alpha_{n}|^{1/\beta_{n}}\operatorname{sign}(M_{n})\) to the extreme value df \(\exp(-x^{-1})\) is of the order \(O(x^{-1}(\log x)^{2} e^{-1/x}(\log n)^{-1})\).

Theorem 3.1

Let \(\{X_{n},n\geq1\}\) be a sequence of independent and identically distributed random variables with common df F following the lognormal distribution. Then there exist absolute constants \(0<\mathcal{C}_{1}<\mathcal{C}_{2}\) such that

$$\frac{\mathcal{C}_{1}}{\log n}< \sup_{x> 0}\bigl\vert F^{n} \bigl(\alpha_{n}x^{\beta_{n}}\bigr)-\Phi_{1}(x)\bigr\vert < \frac{\mathcal{C}_{2}}{\log n} $$

for \(n>n_{0}\) with \(n_{0}\) sufficiently large, where \(\alpha_{n}\) and \(\beta_{n}\) are determined by (2.9) and (2.10), respectively.
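Although the constants \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are not explicit, the \(1/\log n\) scaling is visible numerically. The sketch below (illustrative only; it recomputes \(\log\alpha_{n}\) from (2.9) by bisection and uses \(F(y)=\Phi(\log y)\), with Φ the standard normal df) evaluates the supremum over a finite grid:

```python
import math

def log_alpha(n):
    # Solve (2.9) for u = log(alpha_n) by bisection on the increasing
    # function u^2 + 2 log u - (2 log n - log(2 pi)).
    target = 2.0 * math.log(n) - math.log(2.0 * math.pi)
    lo, hi = 1e-6, math.sqrt(2.0 * math.log(n)) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * mid + 2.0 * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sup_error(n, grid):
    # max over the grid of |F^n(alpha_n x^{beta_n}) - Phi_1(x)|
    u = log_alpha(n)
    beta = 1.0 / u
    worst = 0.0
    for x in grid:
        z = u + beta * math.log(x)                # log(alpha_n * x^{beta_n})
        sf = 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - F(alpha_n x^{beta_n})
        fn = math.exp(n * math.log1p(-sf))        # F^n, stable for tiny sf
        worst = max(worst, abs(fn - math.exp(-1.0 / x)))
    return worst

grid = [0.1 * k for k in range(1, 101)]           # x in (0, 10]
s = sup_error(10 ** 6, grid) * math.log(10 ** 6)
```

On this grid, the supremum scaled by log n is of order one for \(n=10^{6}\), in line with the stated rate.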

Theorem 3.2

Let \(\alpha_{n}\) and \(\beta_{n}\) be given by (2.9) and (2.10). Then, for fixed \(x>0\),

$$\bigl\vert F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x)\bigr\vert \sim x^{-1}e^{-1/x} \biggl(1+ \biggl(1+\frac{1}{2}\log x \biggr)\log x \biggr)\frac{1}{2\log n}, $$

as \(n\rightarrow\infty\).
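Theorem 3.2 can likewise be checked numerically: for fixed x, the ratio of the actual error to the stated asymptotic equivalent should approach 1, although slowly, since \(\beta^{2}_{n}\) approaches \(1/(2\log n)\) only at a rate of roughly \(\log\log n/\log n\). An illustrative sketch (ours, under the same \(F(y)=\Phi(\log y)\) representation as before):

```python
import math

def log_alpha(n):
    # Solve (2.9) for u = log(alpha_n) by bisection.
    target = 2.0 * math.log(n) - math.log(2.0 * math.pi)
    lo, hi = 1e-6, math.sqrt(2.0 * math.log(n)) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * mid + 2.0 * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def error_ratio(n, x):
    # actual |F^n(alpha_n x^{beta_n}) - Phi_1(x)| over the asymptotic
    # equivalent of Theorem 3.2; should tend to 1 as n grows.
    u = log_alpha(n)
    beta = 1.0 / u
    z = u + beta * math.log(x)
    sf = 0.5 * math.erfc(z / math.sqrt(2.0))      # 1 - F(alpha_n x^{beta_n})
    actual = abs(math.exp(n * math.log1p(-sf)) - math.exp(-1.0 / x))
    lx = math.log(x)
    predicted = (math.exp(-1.0 / x) / x) * (1.0 + (1.0 + 0.5 * lx) * lx) \
        / (2.0 * math.log(n))
    return actual / predicted

r1 = error_ratio(10 ** 6, 2.0)
r2 = error_ratio(10 ** 10, 2.0)
```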

4 Proofs

We first prove Theorem 3.2, since its proof is relatively simple.

Proof of Theorem 3.2

By Lemma 2.1, we have

$$\begin{aligned} 1-F\bigl(\alpha_{n}x^{\beta_{n}}\bigr) =&\frac{1}{\sqrt{2\pi}}\bigl( \log \bigl(\alpha_{n}x^{\beta_{n}}\bigr)\bigr)^{-1}\exp \biggl(-\frac{(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2}\biggr) \\ &{}\times\bigl(1-\bigl(\log \bigl(\alpha_{n}x^{\beta_{n}}\bigr) \bigr)^{-2}\bigr)+\mathcal{S}\bigl(\alpha_{n}x^{\beta_{n}} \bigr) \\ =:&T_{1}(x)T_{2}(x)+T_{3}(x) \end{aligned}$$

for \(x>0\), where \(T_{1}(x)=\frac{1}{\sqrt{2\pi}}(\log (\alpha_{n}x^{\beta_{n}}))^{-1}\exp(-\frac{(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2})\), \(T_{2}(x)=1-(\log (\alpha_{n}x^{\beta_{n}}))^{-2}\) and \(T_{3}(x)=\mathcal {S}(\alpha_{n}x^{\beta_{n}})\).

First, we calculate \(T_{1}(x)\). By (2.9) and (2.10), we have

$$\begin{aligned} T_{1}(x) =&\frac{1}{\sqrt{2\pi}}(\log\alpha_{n})^{-1} \exp{ \biggl(-\frac {(\log \alpha_{n})^{2}}{2} \biggr)}\bigl(1+(\log\alpha_{n})^{-1} \beta_{n}\log x\bigr)^{-1} \\ &{}\times\exp \biggl(-(\log\alpha_{n})\beta_{n}\log x- \frac{\beta^{2}_{n}\log^{2}x}{2} \biggr) \\ =&\frac{1}{nx}\bigl(1+\beta^{2}_{n}\log x \bigr)^{-1}\exp \biggl(-\frac {\beta^{2}_{n}\log^{2}x}{2} \biggr) \\ =&\frac{1}{nx}\bigl(1-\beta^{2}_{n}\log x+O\bigl( \beta^{4}_{n}\bigr)\bigr) \biggl(1-\frac{\beta^{2}_{n}\log^{2}x}{2}+O\bigl( \beta ^{4}_{n}\bigr) \biggr) \\ =&\frac{1}{nx} \biggl(1-\beta^{2}_{n} \biggl(1+\frac{1}{2}\log x\biggr)\log x+O\bigl(\beta^{4}_{n} \bigr) \biggr). \end{aligned}$$
(4.1)

Second, we estimate \(T_{2}(x)\) and \(T_{3}(x)\) for \(x>0\). By (2.10), we derive

$$\begin{aligned} T_{2}(x) &=1-\beta^{2}_{n}\bigl(1+ \beta^{2}_{n}\log x\bigr)^{-2} \\ &=1-\beta^{2}_{n}\bigl(1-2\beta^{2}_{n} \log x+O\bigl(\beta^{4}_{n}\bigr)\bigr) \\ &=1-\beta^{2}_{n}+O\bigl( \beta^{4}_{n}\bigr), \end{aligned}$$
(4.2)

and by Lemma 2.1 we have

$$\begin{aligned} T_{3}(x)&\leq\frac{3}{\sqrt{2\pi}}\bigl(\log \bigl(\alpha_{n}x^{\beta_{n}} \bigr)\bigr)^{-5}\exp{ \biggl(-\frac{(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2} \biggr)} \\ &=3\beta^{4}_{n}\bigl(1+\beta^{2}_{n} \log x\bigr)^{-4}T_{1}(x) \\ &=O\bigl(n^{-1}\beta^{4}_{n}\bigr). \end{aligned}$$
(4.3)

By (4.1)-(4.3), we have

$$ 1-F\bigl(\alpha_{n}x^{\beta_{n}}\bigr)= \frac{1}{nx} \biggl(1-\beta ^{2}_{n}\biggl(1+\biggl(1+ \frac{1}{2}\log x\biggr)\log x\biggr)+O\bigl(\beta^{4}_{n} \bigr) \biggr). $$

Thus, we obtain

$$\begin{aligned}& F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x) \\& \quad = \biggl(1-\frac{1}{nx}\biggl(1-\beta ^{2}_{n}\biggl(1+\biggl(1+\frac{1}{2}\log x\biggr)\log x\biggr)+O\bigl(\beta^{4}_{n}\bigr)\biggr) \biggr)^{n}-\exp\biggl(-\frac{1}{x}\biggr) \\& \quad =\exp\biggl(-\frac{1}{x}\biggr) \biggl(\exp\biggl(\frac{1}{x} \biggl(\beta^{2}_{n}\biggl(1+\biggl(1+\frac {1}{2}\log x \biggr)\log x\biggr)+O\bigl(\beta^{4}_{n}\bigr)\biggr) \biggr)-1 \biggr) \\& \quad =\exp\biggl(-\frac{1}{x}\biggr) \biggl( \beta^{2}_{n}\frac{1}{x}\biggl(1+\biggl(1+ \frac {1}{2}\log x\biggr)\log x\biggr)+O\bigl(\beta^{4}_{n} \bigr) \biggr) \end{aligned}$$
(4.4)

for large n and \(x>0\). We immediately get the result of Theorem 3.2 by (4.4). □

Proof of Theorem 3.1

By Theorem 3.2, there exists an absolute constant \(\mathcal{C}_{1}>0\) such that

$$\sup_{x>0}\bigl\vert F^{n}\bigl( \alpha_{n}x^{\beta_{n}}\bigr)-\Phi_{1}(x)\bigr\vert > \frac{\mathcal {C}_{1}}{\log n}. $$

In order to obtain the upper bound for \(x>0\), we need to prove

$$\begin{aligned} (\mathrm{a})&\quad \sup_{1\leq x< \infty}\bigl\vert F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x)\bigr\vert < d_{1}\beta^{2}_{n}, \end{aligned}$$
(4.5)
$$\begin{aligned} (\mathrm{b})&\quad \sup_{c_{n}\leq x< 1}\bigl\vert F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x)\bigr\vert < d_{2}\beta^{2}_{n}, \end{aligned}$$
(4.6)
$$\begin{aligned} (\mathrm{c})&\quad \sup_{0< x< c_{n}}\bigl\vert F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x)\bigr\vert < d_{3}\beta^{2}_{n} \end{aligned}$$
(4.7)

for \(n>n_{0}\), where \(d_{i}>0\), \(i=1,2,3\) are absolute constants and

$$c_{n}=\frac{1}{2\log\log\alpha_{n}} $$

is positive for \(n>n_{0}\). By (2.9), we have

$$0.4(2\log n)^{1/2}< \log\alpha_{n}< (2\log n)^{1/2} $$

for \(n>n_{0}\).

First, consider the case of \(x\geq c_{n}\). Set

$$\begin{aligned}& R_{n}(x)=-\bigl[n\log F\bigl(\alpha_{n}x^{\beta_{n}} \bigr)+n\Psi_{n}(x)\bigr], \\& B_{n}(x)= \exp\bigl(-R_{n}(x)\bigr),\qquad A_{n}(x)=\exp \biggl(-n\Psi_{n}(x)+ \frac{1}{x}\biggr), \end{aligned}$$

where \(\Psi_{n}(x)=1-F(\alpha_{n}x^{\beta_{n}})\) and \(A_{n}(x)\rightarrow1\), as \(x\rightarrow\infty\). We have

$$\begin{aligned} \Psi_{n}(x)&\leq\Psi_{n}(c_{n})< \frac{1}{\sqrt{2\pi}} \bigl(\log \bigl(\alpha_{n}c^{\beta_{n}}_{n}\bigr) \bigr)^{-1}\exp{ \biggl(-\frac{(\log (\alpha_{n}c^{\beta_{n}}_{n}))^{2}}{2} \biggr)} \\ &=\frac{1}{n}\bigl(1+\beta^{2}_{n}\log c_{n}\bigr)^{-1}\exp \biggl(-\log c_{n}- \frac{\beta^{2}_{n}\log^{2}c_{n}}{2} \biggr) \\ &< \frac{1}{n}\bigl(1+\beta^{2}_{n}\log c_{n}\bigr)^{-1}c^{-1}_{n} \\ &= \biggl(1-\frac{\log(2\log\log\alpha_{n})}{(\log\alpha _{n})^{2}} \biggr)^{-1}\frac{2\log\log\alpha_{n}}{n} \\ &< \tilde{c}_{4}< 1 \end{aligned}$$

for \(n>n_{0}\). So,

$$\inf_{x>c_{n}}\bigl(1-\Psi_{n}(x)\bigr)>1- \tilde{c}_{4}>0. $$

Since

$$-x-\frac{x^{2}}{2(1-x)}< \log(1-x)< -x $$

for \(0< x<1\), we obtain

$$\begin{aligned} 0&< R_{n}(x)\leq\frac{n\Psi^{2}_{n}(x)}{2(1-\Psi_{n}(x))}< \frac {n\Psi^{2}_{n}(c_{n})}{2(1-\Psi_{n}(x))} \\ &< \frac{n^{-1}(1+\beta^{2}_{n}\log c_{n})^{-2}c^{-2}_{n}}{2(1-\Psi_{n}(x))} \\ &< \frac{n^{-1}(1+\beta^{2}_{n}\log c_{n})^{-2}c^{-2}_{n}(\log\alpha_{n})^{2}}{2(1-\tilde{c}_{4})\beta^{-2}_{n}} \\ &=\frac{2}{\sqrt{2\pi}(1-\tilde{c}_{4})} \biggl(1-\frac{\log (2\log\log\alpha_{n})}{(\log\alpha_{n})^{2}} \biggr)^{-2} \frac{(\log \log\alpha_{n})^{2}\log\alpha_{n}}{\exp(\frac{(\log\alpha _{n})^{2}}{2})}\beta^{2}_{n} \\ &< \tilde{c}_{5}\beta^{2}_{n} \end{aligned}$$

for \(n>n_{0}\).

Hence, we have

$$n^{-1}\beta^{-2}_{n}\bigl(1+\beta^{2}_{n} \log c_{n}\bigr)^{-2}c^{-2}_{n}< \tilde{c}_{6} $$

for \(n>n_{0}\). Thus,

$$ \bigl\vert B_{n}(x)-1\bigr\vert < R_{n}(x)< \tilde{c}_{5}\beta^{2}_{n} $$
(4.8)

for \(n>n_{0}\). By (4.8), we have

$$\begin{aligned}& \bigl\vert F^{n}\bigl(\alpha_{n}x^{\beta_{n}}\bigr)- \Phi_{1}(x)\bigr\vert \\& \quad \leq\Phi_{1}(x)B_{n}(x)\bigl\vert A_{n}(x)-1\bigr\vert +\bigl\vert B_{n}(x)-1\bigr\vert \\& \quad < \Phi_{1}(x)\bigl\vert A_{n}(x)-1\bigr\vert + \tilde{c}_{5}\beta^{2}_{n} \end{aligned}$$
(4.9)

for \(x\geq c_{n}\).

We now prove (4.5). By (2.9), (2.10), and the definition of \(A_{n}(x)\), we have

$$ A'_{n}(x)=A_{n}(x)\frac{1}{x^{2}} \biggl( \exp\biggl(-\frac{1}{2}\beta ^{2}_{n} \log^{2}x\biggr)-1 \biggr) < 0 $$

for \(x>1\). Since

$$\begin{aligned}& 0< n\gamma(\alpha_{n})< \beta^{2}_{n}\quad \mbox{and} \quad e^{x}-1\leq xe^{x}\quad \mbox{for } 0\leq x \leq1\quad \mbox{and} \\& \exp\bigl(n\gamma(\alpha_{n})\bigr)< \exp\bigl(\beta^{2}_{n} \bigr)< \exp\biggl(\frac {25}{8\log n}\biggr)< \exp\biggl(\frac{25}{8\log n_{0}}\biggr) \quad \mbox{for } n>n_{0}, \end{aligned}$$

and by (2.5), (2.9), we have

$$\begin{aligned} \sup_{x\geq1}\bigl\vert A_{n}(x)-1\bigr\vert &= \bigl\vert A_{n}(1)-1\bigr\vert \\ &=\bigl\vert \exp\bigl(n\gamma(\alpha_{n})\bigr)-1\bigr\vert \\ &\leq n\gamma(\alpha_{n})\exp\bigl(n\gamma(\alpha_{n})\bigr) \\ &\leq\tilde{c}_{7}\beta^{2}_{n} \end{aligned}$$
(4.10)

for \(n>n_{0}\).

Combining (4.9) with (4.10), we have

$$ \sup_{x\geq1}\bigl\vert F^{n}\bigl( \alpha_{n}x^{\beta_{n}}\bigr)-\Phi_{1}(x)\bigr\vert < ( \tilde {c}_{5}+\tilde{c}_{7})\beta^{2}_{n}. $$

Second, consider the case \(c_{n}\leq x<1\). By Lemma 2.1, we obtain

$$\begin{aligned} -n\Psi_{n}(x)+\frac{1}{x} =&-n \biggl(\frac{1}{\sqrt{2\pi }}\bigl( \log \bigl(\alpha_{n}x^{\beta_{n}}\bigr)\bigr)^{-1}\exp \biggl(-\frac{(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2}\biggr)-\gamma\bigl(\alpha_{n}x^{\beta_{n}} \bigr) \biggr)+\frac{1}{x} \\ =&-n \biggl(\frac{1}{\sqrt{2\pi}}\bigl(\log \bigl(\alpha_{n}x^{\beta_{n}} \bigr)\bigr)^{-1}\exp\biggl(-\frac{(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2}\biggr) \\ &{}-\frac{1}{\sqrt{2\pi}}\bigl(\log \bigl(\alpha_{n}x^{\beta_{n}} \bigr)\bigr)^{-3}q_{n}\bigl(\alpha_{n}x^{\beta_{n}} \bigr)\exp\biggl(-\frac {(\log (\alpha_{n}x^{\beta_{n}}))^{2}}{2}\biggr) \biggr)+\frac{1}{x} \\ =&\frac{1}{x}\bigl(1+\beta^{2}_{n}\log x \bigr)^{-1} \biggl(-\bigl(1-(\log\alpha_{n})^{-2}q_{n} \bigl(\alpha_{n}x^{\beta _{n}}\bigr) \bigl(1+\beta^{2}_{n} \log x\bigr)^{-2}\bigr) \\ &{}\times\exp\biggl(-\frac{1}{2}\beta^{2}_{n} \log^{2}x\biggr)+1+\beta^{2}_{n}\log x \biggr) \\ =&\frac{1}{x}\bigl(1+\beta^{2}_{n}\log x \bigr)^{-1}Q_{n}(x), \end{aligned}$$

where \(0< q_{n}(x)<1\) and

$$ Q_{n}(x)=- \bigl(1-\beta^{2}_{n} q_{n} \bigl(\alpha_{n}x^{\beta_{n}}\bigr) \bigl(1+\beta^{2}_{n} \log x\bigr)^{-2} \bigr)\exp\biggl(-\frac{1}{2} \beta^{2}_{n}\log^{2}x\biggr)+1+ \beta^{2}_{n}\log x. $$

Since \(e^{-x}>1-x\) for \(x>0\), we have

$$\begin{aligned} Q_{n}(x)&< -\bigl(1-\beta^{2}_{n} q_{n} \bigl(\alpha_{n}x^{\beta_{n}}\bigr) \bigl(1+\beta^{2}_{n} \log x\bigr)^{-2}\bigr) \biggl(1-\frac{1}{2}\beta^{2}_{n} \log^{2}x\biggr)+1+\beta^{2}_{n}\log x \\ &< \beta^{2}_{n}\biggl(\bigl(1+\beta^{2}_{n} \log x\bigr)^{-2}+\frac{1}{2}\log^{2}x\biggr). \end{aligned}$$

On the other hand,

$$\begin{aligned} Q_{n}(x)&>\beta^{2}_{n} q_{n}\bigl( \alpha_{n}x^{\beta_{n}}\bigr) \bigl(1+\beta^{2}_{n} \log x\bigr)^{-2}+\beta^{2}_{n}\log x \\ &>\beta^{2}_{n}\log x. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \bigl\vert Q_{n}(x)\bigr\vert &< \beta^{2}_{n} \biggl(\bigl(1+\beta^{2}_{n}\log x\bigr)^{-2}+ \frac{1}{2}\log^{2}x+\vert \log x\vert \biggr) \\ &< \beta^{2}_{n} \biggl(\biggl(1-\frac{\log(2\log\log\alpha_{n})}{\log ^{2}\alpha_{n}} \biggr)^{-2}+\frac{1}{2}\log^{2}x+\vert \log x\vert \biggr) \\ &< \beta^{2}_{n}\biggl(\tilde{c}_{8}+ \frac{1}{2}\log^{2}x+\vert \log x\vert \biggr) \end{aligned}$$

for \(n>n_{0}\), where \(c_{n}\leq x<1\). Therefore,

$$\begin{aligned} \biggl\vert -n\Psi_{n}(x)+\frac{1}{x}\biggr\vert &< \beta^{2}_{n}\biggl(\tilde{c}_{8}+ \frac {1}{2}\log^{2}x+\vert \log x\vert \biggr)x^{-1} \bigl(1+\beta^{2}_{n}\log x\bigr)^{-1} \\ &< \beta^{2}_{n}\biggl(\tilde{c}_{8}+ \frac{1}{2}\log^{2}c_{n}+\vert \log c_{n} \vert \biggr)c^{-1}_{n}\bigl(1+\beta^{2}_{n} \log c_{n}\bigr)^{-1} \\ &< \tilde{c}_{9} \end{aligned}$$

for \(n\geq n_{0}\). Thus, there exists a positive number θ satisfying \(0<\theta<1\) such that

$$\begin{aligned} \Phi_{1}(x)\bigl\vert A_{n}(x)-1\bigr\vert &< \Phi_{1}(x)\exp\biggl(\theta\biggl(-n\Psi_{n}(x)+ \frac {1}{x}\biggr)\biggr)\biggl\vert -n\Psi_{n}(x)+ \frac{1}{x}\biggr\vert \\ &< \exp(\tilde{c}_{9})\beta^{2}_{n}\sup _{c_{n}\leq x< 1}\biggl\vert \biggl(\tilde{c}_{8}+ \frac{1}{2}\log^{2}x+\vert \log x\vert \biggr)x^{-1} \biggr\vert \bigl(1+\beta ^{2}_{n}\log c_{n} \bigr)^{-1} \\ &< \tilde{c}_{10}\beta^{2}_{n}. \end{aligned}$$
(4.11)

By (4.9) and (4.11), the proof of (4.6) is complete.

Third, consider the case \(0< x< c_{n}\). Since

$$\Phi_{1}(x)< \Phi_{1}(c_{n})= \beta^{2}_{n}, $$

we have

$$\begin{aligned} \sup_{0< x< c_{n}}\bigl\vert F^{n}\bigl( \alpha_{n}x^{\beta_{n}}\bigr)-\Phi _{1}(x)\bigr\vert &< F^{n}\bigl(\alpha_{n}c^{\beta_{n}}_{n}\bigr)+ \Phi_{1}(c_{n}) \\ &< \sup_{c_{n}< x< 1}\bigl\vert F^{n}\bigl( \alpha_{n}x^{\beta_{n}}\bigr)-\Phi_{1}(x)\bigr\vert +2 \Phi _{1}(c_{n}) \\ &< (\tilde{c}_{5}+\tilde{c}_{10})\beta^{2}_{n}+ \beta^{2}_{n} \\ &< \tilde{c}_{11}\beta^{2}_{n}. \end{aligned}$$

The proof of Theorem 3.1 is finished. □