
On consistency of the weighted least squares estimators in a semiparametric regression model


Abstract

This paper is concerned with the semiparametric regression model \(y_i=x_i\beta +g(t_i)+\sigma _ie_i,~~i=1,2,\ldots ,n,\) where \(\sigma _i^2=f(u_i)\), \((x_i,t_i,u_i)\) are known fixed design points, \(\beta \) is an unknown parameter to be estimated, \(g(\cdot )\) and \(f(\cdot )\) are unknown functions, and the random errors \(e_i\) are widely orthant dependent (WOD) random variables. The p-th (\(p>0\)) mean consistency and strong consistency of the least squares estimators and the weighted least squares estimators of \(\beta \) and \(g\) are investigated under mild conditions. A simulation study is also undertaken to assess the finite sample performance of the established results. The results obtained in this paper generalize and improve corresponding ones for negatively associated random variables.
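The estimators themselves are defined in the body of the paper (not reproduced in this excerpt), so the following is only a minimal Python sketch of a typical weighted least squares estimator for a model of this form, under assumed choices that are not taken from the paper: Gaussian-kernel smoothing weights, \(\beta =2\), \(g(t)=\sin (2\pi t)\), and \(f(u)=0.5+u\), with the variances used as weights.

```python
import numpy as np

def kernel_weights(t0, t, bw):
    # Gaussian-kernel smoothing weights w_j(t0); an assumed, generic
    # choice, not necessarily the weight functions used in the paper.
    k = np.exp(-0.5 * ((t0 - t) / bw) ** 2)
    return k / k.sum()

def wls_semiparametric(x, y, t, sigma2, bw=0.05):
    # Weighted least squares for y_i = x_i*beta + g(t_i) + sigma_i*e_i:
    # partial out g(.) by kernel smoothing, then weight by 1/sigma_i^2.
    W = np.array([kernel_weights(ti, t, bw) for ti in t])  # n x n
    x_tilde, y_tilde = x - W @ x, y - W @ y
    w = 1.0 / sigma2
    beta_hat = np.sum(w * x_tilde * y_tilde) / np.sum(w * x_tilde ** 2)
    g_hat = W @ (y - x * beta_hat)                # plug-in estimate of g
    return beta_hat, g_hat

# Simulated data, loosely in the spirit of the model (all values assumed)
rng = np.random.default_rng(0)
n = 2000
t = np.sort(rng.uniform(0.0, 1.0, n))
u = rng.uniform(0.0, 1.0, n)
x = rng.normal(0.0, 1.0, n)
f_u = 0.5 + u                                    # sigma_i^2 = f(u_i)
y = 2.0 * x + np.sin(2 * np.pi * t) + np.sqrt(f_u) * rng.normal(0.0, 1.0, n)

beta_hat, g_hat = wls_semiparametric(x, y, t, f_u)
```

With independent errors, the weighted estimator lands close to the true \(\beta =2\); replacing the kernel weights or the error mechanism (e.g., by a WOD sequence) changes only the inputs, not the estimator's form.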


Figs. 1–8: simulation results (images not included).


References

  • Aneiros G, Quintela A (2001) Asymptotic properties in partial linear models under dependence. TEST 10:333–355

  • Asadian N, Fakoor V, Bozorgnia A (2006) Rosenthal’s type inequalities for negatively orthant dependent random variables. J Iran Stat Soc 5(1–2):66–75

  • Baek J, Liang H (2006) Asymptotics of estimators in semiparametric model under NA samples. J Stat Plan Inference 136:3362–3382

  • Chen H (1988) Convergence rates for parametric components in a partly linear model. Ann Stat 16:136–146

  • Chen MH, Ren Z, Hu SH (1998) Strong consistency of a class of estimators in partial linear model. Acta Math Sin 41(2):429–439

  • Chen W, Wang YB, Cheng DY (2016) An inequality of widely dependent random variables and its applications. Lith Math J 56(1):16–31

  • Engle RF, Granger CWJ, Weiss RJ (1986) Nonparametric estimates of the relation between weather and electricity sales. J Am Stat Assoc 81(394):310–320

  • Gao JT (1992) Consistency of estimation in a semiparametric regression model (I). J Syst Sci Math Sci 12(3):269–272

  • Gao JT, Chen XR, Zhao LC (1994) Asymptotic normality of a class of estimators in partial linear models. Acta Math Sin 37(2):256–268

  • Hamilton SA, Truong YK (1997) Local linear estimation in partly linear models. J Multivar Anal 60:1–19

  • He W, Cheng DY, Wang YB (2013) Asymptotic lower bounds of precise large deviations with nonnegative and dependent random variables. Stat Probab Lett 83:331–338

  • Hong SY (1991) Estimate for a semiparametric regression model. Sci China Math 12A:1258–1272

  • Hu SH (1999) Estimate for a semiparametric regression model. Acta Math Sci 19A(5):541–549

  • Hu SH (2006) Fixed-design semiparametric regression for linear time series. Acta Math Sci 26B(1):74–82

  • Hu TZ (2000) Negatively superadditive dependence of random variables with applications. Chin J Appl Probab Stat 16:133–144

  • Joag-Dev K, Proschan F (1983) Negative association of random variables with applications. Ann Stat 11(1):286–295

  • Liu L (2009) Precise large deviations for dependent random variables with heavy tails. Stat Probab Lett 79:1290–1298

  • Liu XJ, Gao QW, Wang YB (2012) A note on a dependent risk model with constant interest rate. Stat Probab Lett 82(4):707–712

  • Mammen E, Van de Geer S (1997) Penalized quasi-likelihood estimation in partial linear models. Ann Stat 25:1014–1035

  • Pan GM, Hu SH, Fang LB, Cheng ZD (2003) Mean consistency for a semiparametric regression model. Acta Math Sci 23A(5):598–606

  • Shen AT (2013a) Bernstein-type inequality for widely dependent sequence and its application to nonparametric regression models. Abstr Appl Anal 2013:9 (Article ID 862602)

  • Shen AT (2013b) On the strong convergence rate for weighted sums of arrays of rowwise negatively orthant dependent random variables. RACSAM 107(2):257–271

  • Shen AT, Zhang Y, Volodin A (2015) Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables. Metrika 78:295–311

  • Shen AT, Yao M, Wang WJ, Volodin A (2016) Exponential probability inequalities for WNOD random variables and their applications. RACSAM 110(1):251–268

  • Speckman P (1988) Kernel smoothing in partial linear models. J R Stat Soc Ser B 50:413–436

  • Volodin A (2002) On the Kolmogorov exponential inequality for negatively dependent random variables. Pak J Stat 18(2):249–253

  • Wang KY, Wang YB, Gao QW (2013) Uniform asymptotics for the finite-time ruin probability of a new dependent risk model with a constant interest rate. Methodol Comput Appl Probab 15(1):109–124

  • Wang SJ, Wang XJ (2013) Precise large deviations for random sums of END real-valued random variables with consistent variation. J Math Anal Appl 402:660–667

  • Wang XJ, Xu C, Hu TC, Volodin A, Hu SH (2014) On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. TEST 23(3):607–629

  • Wang XJ, Hu SH (2015a) The consistency of the nearest neighbor estimator of the density function based on WOD samples. J Math Anal Appl 429(1):497–512

  • Wang XJ, Zheng LL, Xu C, Hu SH (2015b) Complete consistency for the estimator of nonparametric regression models based on extended negatively dependent errors. Statistics 49(2):396–407

  • Zhou XC, Hu SH (2010) Moment consistency of estimators in semiparametric regression model under NA samples. Pure Appl Math 6(2):262–269

  • Zhou XC, Lin JG (2013) Asymptotic properties of wavelet estimators in semiparametric regression models under dependent errors. J Multivar Anal 122:251–270

Download references

Acknowledgements

The authors are grateful to the Referee for carefully reading the manuscript and for providing helpful comments and constructive criticism which enabled them to improve the paper.

Author information


Correspondence to Xuejun Wang.

Additional information

Supported by the National Natural Science Foundation of China (11671012, 11501004, 11501005), the Natural Science Foundation of Anhui Province (1508085J06) and the Key Projects for Academic Talent of Anhui Province (gxbjZD2016005).

Appendix


Lemma A.1

Let \(p>0\) and \(\{X_n,n\ge 1\}\) be a sequence of zero mean WOD random variables with dominating coefficient h(n), which is stochastically dominated by a random variable X. Assume that \(\{a_{ni}(\cdot ), 1\le i\le n, n\ge 1\}\) is a function array defined on compact set A satisfying

$$\begin{aligned} \max _{1\le j\le n}\sum _{i=1}^n |a_{ni}(z_j)|=O(1) \end{aligned}$$
(3.15)

and

$$\begin{aligned} \max _{1\le i,j \le n}|a_{ni}(z_j)|=O(n^{-\alpha }(h(n))^{-\beta }),~\exists ~\alpha >0,~\beta \ge 1. \end{aligned}$$
(3.16)

If \(EX^2<\infty \) for \(0<p\le 2\), then

$$\begin{aligned} \lim _{n\rightarrow \infty }\max _{1\le j\le n}E\left| \sum _{i=1}^na_{ni}(z_j)X_i\right| ^p=0. \end{aligned}$$
(3.17)

If \(E|X|^p<\infty \) for \(p>2\), then (3.17) still holds.
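Independent random variables are WOD with dominating coefficient \(h(n)\equiv 1\), so a small Monte Carlo sketch (illustrative only, not part of the proof) can display the decay asserted in (3.17) for the uniform weights \(a_{ni}(z_j)=1/n\), which satisfy (3.15) and, with \(h\equiv 1\), (3.16) with \(\alpha =1\); the sample sizes and replication count below are arbitrary.

```python
import numpy as np

def max_weighted_moment(n, p, reps=4000, seed=0):
    # Monte Carlo estimate of max_j E|sum_i a_ni(z_j) X_i|^p for the
    # uniform weights a_ni(z_j) = 1/n (the same for every j, so the
    # maximum over j collapses to a single expectation), X_i iid N(0,1).
    rng = np.random.default_rng(seed)
    means = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
    return float(np.mean(np.abs(means) ** p))

m_100 = max_weighted_moment(100, p=1)    # analytically ~ sqrt(2/(100*pi))
m_1600 = max_weighted_moment(1600, p=1)  # analytically ~ sqrt(2/(1600*pi))
```

For \(p=1\) the exact value is \(\sqrt{2/(\pi n)}\), so the estimates shrink at the rate the lemma predicts.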

Remark A.1

Lemma A.1 also holds when the moment condition \(EX^2<\infty \) is replaced by \(\sup _{i}EX_i^2<\infty \), \(E|X|^p<\infty \) is replaced by \(\sup _{i}E|X_i|^p<\infty \), and the stochastic domination condition is removed. Under similar modifications, Theorem 2.1 also holds.

Proof of Lemma A.1

Without loss of generality, we can assume that \(a_{ni}(z_j)>0\).

If \(0<p\le 2\), then by Jensen’s inequality, the Marcinkiewicz–Zygmund-type inequality (see, e.g., Wang et al. 2014), (3.15), (3.16) and \(EX^2<\infty \), we have

$$\begin{aligned}&\max _{1\le j\le n}E\left| \sum _{i=1}^na_{ni}(z_j)X_i\right| ^p \\&\quad \le C\left( EX^2\right) ^{p/2}\left( h(n)\max _{1\le i,j\le n}a_{ni}(z_j)\right) ^{p/2}\left( \max _{1\le j\le n}\sum _{i=1}^na_{ni}(z_j)\right) ^{p/2}\\&\quad \le Cn^{-\alpha p/2}(h(n))^{(1-\beta )p/2}\rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

If \(p>2\), we denote

$$\begin{aligned} X_{ni}^{j}=n^{1/p}(h(n))^{\beta /p}a_{ni}(z_j)X_i, \end{aligned}$$

so it suffices to prove

$$\begin{aligned} \frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}E\left| \sum _{i=1}^nX_{ni}^j\right| ^p\rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

For any \(t>0\), denote

$$\begin{aligned}&Y_{ni}^j=-t^{1/p}I(X_{ni}^j<-t^{1/p})+X_{ni}^jI(|X_{ni}^j|\le t^{1/p})+t^{1/p}I(X_{ni}^j>t^{1/p}),\\&Z_{ni}^j=(X_{ni}^j+t^{1/p})I(X_{ni}^j<-t^{1/p}) +(X_{ni}^j-t^{1/p})I(X_{ni}^j>t^{1/p}). \end{aligned}$$

For fixed \(t>0\) and \(1 \le j\le n\), we can see that \(\{Y_{ni}^j,1\le i\le n, n\ge 1\}\) and \(\{Z_{ni}^j,1\le i\le n, n\ge 1\}\) are both arrays of rowwise WOD random variables. Noting that \(X_{ni}^j=Y_{ni}^j-EY_{ni}^j+Z_{ni}^j-EZ_{ni}^j\), we have

$$\begin{aligned}&\frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}E\left| \sum _{i=1}^nX_{ni}^j\right| ^p\nonumber \\&\quad =\frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\left[ \int _{0}^{n\varepsilon }P\left( \left| \sum _{i=1}^nX_{ni}^j\right| ^p>t\right) dt\right. \nonumber \\&\left. \qquad +\int _{n\varepsilon }^{\infty }P\left( \left| \sum _{i=1}^nX_{ni}^j\right| ^p>t\right) dt\right] \nonumber \\&\quad \le \varepsilon +\frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }P\left( \left| \sum _{i=1}^n\left( Y_{ni}^j-E Y_{ni}^j\right) \right|>t^{1/p}/2\right) dt\nonumber \\&\qquad +\frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }P\left( \left| \sum _{i=1}^n\left( Z_{ni}^j-E Z_{ni}^j\right) \right| >t^{1/p}/2\right) dt\nonumber \\&\quad \doteq \varepsilon +I_1+I_2. \end{aligned}$$
(3.18)
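The splitting \(X_{ni}^j=Y_{ni}^j+Z_{ni}^j\) used to obtain (3.18) is an exact pointwise identity: \(Y\) truncates at \(\pm t^{1/p}\) and \(Z\) keeps the exceedance. A minimal numerical check (the function name is ours, purely illustrative):

```python
def split_at_threshold(x, c):
    # Y: x truncated to [-c, c]; Z: the exceedance beyond +-c, so that
    # x = Y + Z exactly (c plays the role of t**(1/p) in the proof).
    y = max(-c, min(x, c))
    z = (x + c if x < -c else 0.0) + (x - c if x > c else 0.0)
    return y, z

c = 1.5
for x in (-4.0, -1.5, -0.2, 0.0, 0.9, 1.5, 3.7):
    y, z = split_at_threshold(x, c)
    assert abs((y + z) - x) < 1e-12 and abs(y) <= c
    assert z == 0.0 or abs(x) > c   # Z vanishes inside the threshold
```

Centering each part (\(Y-EY\) and \(Z-EZ\)) then preserves the decomposition of the zero-mean sum.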

First, we prove \(I_2\rightarrow 0,~n\rightarrow \infty \). Note that

$$\begin{aligned} \max _{1\le j\le n}\max _{t>n\varepsilon }\left| t^{-1/p}\sum _{i=1}^nEZ_{ni}^j\right|\le & {} C n^{-1} \max _{1\le j\le n}\sum _{i=1}^nE|X_{ni}^j|^pI\left( |X_{ni}^j|>(n\varepsilon )^{1/p}\right) \\\le & {} C (h(n))^{\beta }\max _{1\le j\le n}\sum _{i=1}^na_{ni}^p(z_j)E|X_i|^p\\\le & {} CE|X|^p(h(n))^{\beta }\max _{1\le i,j\le n}a_{ni}^{p-1}(z_j)\max _{1\le j\le n}\sum _{i=1}^na_{ni}(z_j)\\\le & {} Cn^{-\alpha (p-1)}(h(n))^{-\beta (p-2)}E|X|^p\rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

Hence, for any \(t>n\varepsilon \) and all n large enough, we have \(\max _{1\le j\le n}\left| \sum \nolimits _{i=1}^nEZ_{ni}^j\right| \le t^{1/p}/4\), which implies that for all n large enough,

$$\begin{aligned} I_2\le & {} \frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }P\left( \left| \sum _{i=1}^nZ_{ni}^j\right|>t^{1/p}/4\right) dt\nonumber \\\le & {} \frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{i=1}^n\int _{n\varepsilon }^{\infty }P\left( |X_{ni}^j|>t^{1/p}\right) dt\nonumber \\\le & {} \frac{1}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{i=1}^nE|X_{ni}^j|^pI(|X_{ni}^j|^p>n\varepsilon )\nonumber \\\le & {} \max _{1\le j\le n}\sum _{i=1}^na_{ni}^p(z_j)E|X_i|^p\le CE|X|^p\max _{1\le i,j\le n}a_{ni}^{p-1}(z_j)\max _{1\le j\le n}\sum _{i=1}^na_{ni}(z_j)\nonumber \\\le & {} CE|X|^pn^{-\alpha (p-1)}(h(n))^{-\beta (p-1)}\rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$
(3.19)

Next, we show that \(I_1\rightarrow 0,~n\rightarrow \infty \). Taking \(q>p\), we have by Markov’s inequality and the Rosenthal-type inequality (see, e.g., Wang et al. 2014) that

$$\begin{aligned} I_1\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}E\left| \sum _{i=1}^n\left( Y_{ni}^j-E Y_{ni}^j\right) \right| ^qdt\nonumber \\\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|Y_{ni}^j|^qdt\nonumber \\&\quad +\frac{C h(n)}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\left( \sum _{i=1}^nE\left( Y_{ni}^j\right) ^2\right) ^{q/2}dt\nonumber \\\doteq & {} I_{11}+I_{12}. \end{aligned}$$
(3.20)

According to the definition of \(Y_{ni}^j\), we have

$$\begin{aligned} I_{11}\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( |X_{ni}^j|\le t^{1/p}\right) dt\nonumber \\&+\,\frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }\sum _{i=1}^nP\left( |X_{ni}^j|>t^{1/p}\right) dt\nonumber \\\doteq & {} I_{111}+I_{112}. \end{aligned}$$
(3.21)

In view of the proof of \(I_2\), we obtain \(I_{112}\rightarrow 0\) as \(n\rightarrow \infty \). Next, we estimate the limit of \(I_{111}\) as \(n\rightarrow \infty \). It is easy to check that

$$\begin{aligned} I_{111}\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( |X_{ni}^j|^p\le (n+1)\varepsilon \right) dt\nonumber \\&+\,\frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( (n+1)\varepsilon<|X_{ni}^j|^p\le t\right) dt\nonumber \\= & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( |X_{ni}^j|^p\le (n+1)\varepsilon \right) dt\nonumber \\&+\,\frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{(n+1)\varepsilon }^{\infty }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( (n+1)\varepsilon <|X_{ni}^j|^p\le t\right) dt\nonumber \\\doteq & {} I_{111}^{'}+I_{111}^{''}. \end{aligned}$$
(3.22)

Similar to the proof of (3.19), we have

$$\begin{aligned} I_{111}^{'}\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{i=1}^nE|X_{ni}^j|^pI\left( |X_{ni}^j|^p\le (n+1)\varepsilon \right) \nonumber \\\le & {} C\max _{1\le j\le n}\sum _{i=1}^na_{ni}^p(z_j)E|X_i|^p\rightarrow 0,~~n\rightarrow \infty , \end{aligned}$$
(3.23)

and

$$\begin{aligned} I_{111}^{''}= & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{m=n+1}^{\infty }\int _{m\varepsilon }^{(m+1)\varepsilon }t^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI\left( (n+1)\varepsilon<|X_{ni}^j|^p\le t\right) dt\nonumber \\\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{m=n+1}^{\infty }m^{-q/p}\sum _{i=1}^nE|X_{ni}^j|^qI((n+1)\varepsilon<|X_{ni}^j|^p\le (m+1)\varepsilon )\nonumber \\= & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{m=n+1}^{\infty }m^{-q/p}\sum _{i=1}^n\sum _{k=n+1}^{m}E|X_{ni}^j|^qI(k\varepsilon<|X_{ni}^j|^p\le (k+1)\varepsilon )\nonumber \\\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{i=1}^n\sum _{k=n+1}^{\infty }k^{1-q/p}E|X_{ni}^j|^qI(k\varepsilon <|X_{ni}^j|^p\le (k+1)\varepsilon )\nonumber \\\le & {} \frac{C}{n(h(n))^{\beta }}\max _{1\le j\le n}\sum _{i=1}^nE|X_{ni}^j|^p\rightarrow 0,~~n\rightarrow \infty , \end{aligned}$$
(3.24)

which imply that \(I_{111}\rightarrow 0,~n\rightarrow \infty \). Noting that \(p>2\), \(\beta \ge 1\) and \(EX^2<\infty \), we have

$$\begin{aligned} I_{12}\le & {} \frac{C h(n)}{n(h(n))^{\beta }}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\left( \sum _{i=1}^nE\left( X_{ni}^j\right) ^2I\left( |X_{ni}^j|\le t^{1/p}\right) \right. \nonumber \\&\left. +\sum _{i=1}^nt^{2/p}P\left( |X_{ni}^j|>t^{1/p}\right) \right) ^{q/2}dt\nonumber \\\le & {} \frac{C}{n}\max _{1\le j\le n}\int _{n\varepsilon }^{\infty }t^{-q/p}\left( \sum _{i=1}^nE\left( X_{ni}^j\right) ^2\right) ^{q/2}dt\nonumber \\\le & {} \frac{C}{n}n^{1-q/p}\max _{1\le j\le n}\left( \sum _{i=1}^nE\left( n^{1/p}(h(n))^{\beta /p}a_{ni}(z_j)X_i\right) ^2\right) ^{q/2}\nonumber \\\le & {} C(EX^2)^{q/2}n^{-\alpha q/2}(h(n))^{(1/p-1/2)\beta q}\rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$
(3.25)

The proof is completed. \(\square \)

Lemma A.2

Let \(\{X_n,n\ge 1\}\) be a sequence of zero mean WOD random variables with dominating coefficient h(n), which is stochastically dominated by a random variable X. Assume that \(\{a_{ni}(\cdot ), 1\le i\le n, n\ge 1\}\) is a function array defined on compact set A satisfying

$$\begin{aligned} \max _{1\le j\le n}\sum _{i=1}^n |a_{ni}(z_j)|=O(1) \end{aligned}$$
(3.26)

and

$$\begin{aligned} \max _{1\le i,j\le n}|a_{ni}(z_j)|=O(n^{-\alpha }(h(n))^{-\beta }),~\exists ~\alpha>0,~\beta >0. \end{aligned}$$
(3.27)

If \(EX^2<\infty \) and \(\sum \nolimits _{i=1}^{n} i^{-\alpha }(h(i))^{-\beta }=O(n^{\alpha })\) for some \(\alpha >0\) and \(\beta >0\), then

$$\begin{aligned} \max _{1\le j\le n}\left| \sum _{i=1}^na_{ni}(z_j)X_i\right| \rightarrow 0~~a.s.,~n\rightarrow \infty . \end{aligned}$$
(3.28)
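As a sanity check of the summability condition \(\sum \nolimits _{i=1}^{n} i^{-\alpha }(h(i))^{-\beta }=O(n^{\alpha })\), one can verify it numerically for an illustrative (assumed) choice, \(h\equiv 1\) and \(\alpha =0.6\), under which the partial sums grow like \(n^{0.4}\):

```python
# Numerical sanity check of sum_{i<=n} i^(-alpha) * h(i)^(-beta) = O(n^alpha)
# for the illustrative choice h(i) = 1 and alpha = 0.6: the partial sums
# then grow like n^(1-alpha) = n^0.4, so the ratio to n^0.6 stays bounded
# (here it even decreases).
alpha, beta = 0.6, 1.0
h = lambda i: 1.0
ratios, s = [], 0.0
for i in range(1, 100_001):
    s += i ** (-alpha) * h(i) ** (-beta)
    if i in (10, 100, 1_000, 10_000, 100_000):
        ratios.append(s / i ** alpha)

assert ratios == sorted(ratios, reverse=True)  # decreasing, hence bounded
```

A growing dominating coefficient, e.g. \(h(i)=\log i\), only strengthens the bound, since \((h(i))^{-\beta }\le 1\).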

Proof

Without loss of generality, we can assume that \(a_{ni}(z_j)>0\).

For any \(\varepsilon >0\), choose \(0<\delta <\alpha /2\) and large \(N\ge 1\), which will be specified later. Denote \(X_{ni}(j)=a_{ni}(z_j)X_i\), and

$$\begin{aligned} Y_{ni}^{(1)}(j)= & {} -n^{-\delta }(h(n))^{-\beta /4}I\left( X_{ni}(j)<-n^{-\delta }(h(n))^{-\beta /4}\right) \\&+\,X_{ni}(j)I\left( |X_{ni}(j)|\le n^{-\delta }(h(n))^{-\beta /4}\right) \\&+\,n^{-\delta }(h(n))^{-\beta /4}I\left( X_{ni}(j)>n^{-\delta }(h(n))^{-\beta /4}\right) ,\\ Y_{ni}^{(2)}(j)= & {} \left( X_{ni}(j)+n^{-\delta }(h(n))^{-\beta /4}\right) I\left( X_{ni}(j)\le -\frac{\varepsilon }{N}(h(n))^{-\beta /4}\right) \\&+\,\left( X_{ni}(j)-n^{-\delta }(h(n))^{-\beta /4}\right) I\left( X_{ni}(j)\ge \frac{\varepsilon }{N}(h(n))^{-\beta /4}\right) ,\\ Y_{ni}^{(3)}(j)= & {} \left( X_{ni}(j)-n^{-\delta }(h(n))^{-\beta /4}\right) I\left( n^{-\delta }(h(n))^{-\beta /4}\le X_{ni}(j)<\frac{\varepsilon }{N}(h(n))^{-\beta /4}\right) ,\\ Y_{ni}^{(4)}(j)= & {} \left( X_{ni}(j)+n^{-\delta }(h(n))^{-\beta /4}\right) I\left( -\frac{\varepsilon }{N}(h(n))^{-\beta /4}<X_{ni}(j)\le -n^{-\delta }(h(n))^{-\beta /4}\right) . \end{aligned}$$

Then

$$\begin{aligned} \max _{1\le j\le n}\left| \sum _{i=1}^n a_{ni}(z_j)X_i\right|\le & {} \max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(1)}(j)\right| +\max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(2)}(j)\right| \nonumber \\&+\,\max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(3)}(j)\right| +\max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(4)}(j)\right| \nonumber \\\doteq & {} J_1+J_2+J_3+J_4. \end{aligned}$$
(3.29)

To prove (3.28), it suffices to show \(J_i\rightarrow 0~a.s.,~n\rightarrow \infty ,~i=1,2,3,4\). We first prove \(J_1\rightarrow 0~a.s.\), \(n\rightarrow \infty \). For each j, we know that \(\{Y_{ni}^{(1)}(j),1\le i\le n,n\ge 1\}\) is still an array of rowwise WOD random variables. In view of \(EX_i=0\), (3.26), (3.27) and \(EX^2<\infty \), we get

$$\begin{aligned}&\max _{1\le j\le n}\left| \sum _{i=1}^n E Y_{ni}^{(1)}(j)\right| \\&\quad \le \max _{1\le j\le n}\sum _{i=1}^n\left[ E|X_{ni}(j)|I\left( |X_{ni}(j)|> n^{-\delta }(h(n))^{-\beta /4}\right) \right. \\&\qquad +\left. n^{-\delta }(h(n))^{-\beta /4}P\left( |X_{ni}(j)|> n^{-\delta }(h(n))^{-\beta /4}\right) \right] \\&\quad \le 2\max _{1\le j\le n}\sum _{i=1}^nE|X_{ni}(j)|I\left( |X_{ni}(j)|> n^{-\delta }(h(n))^{-\beta /4}\right) \\&\quad \le C\max _{1\le j\le n}n^{\delta }(h(n))^{\beta /4}\sum _{i=1}^nE|X_{ni}(j)|^2I\left( |X_{ni}(j)|> n^{-\delta }(h(n))^{-\beta /4}\right) \\&\quad \le Cn^{\delta }(h(n))^{\beta /4}\cdot \max _{1\le j\le n}\sum _{i=1}^na_{ni}(z_j)\cdot \max _{1\le i,j\le n}a_{ni}(z_j)\cdot EX^2\\&\quad \le Cn^{\delta -\alpha }(h(n))^{-3\beta /4}EX^2\rightarrow 0,~n\rightarrow \infty . \end{aligned}$$

Hence, for all n large enough, \(\max \limits _{1\le j\le n}\left| \sum \nolimits _{i=1}^n E Y_{ni}^{(1)}(j)\right| <\frac{\varepsilon }{2}\). Applying Markov’s inequality and Rosenthal-type inequality, and taking

$$\begin{aligned} q>\max \left\{ \frac{2(\delta +1)-\alpha }{\delta },\frac{4}{\alpha },\frac{2}{\beta },2\right\} , \end{aligned}$$

we have

$$\begin{aligned}&\sum _{n=1}^\infty P\left( \max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(1)}(j)\right|>\varepsilon \right) \nonumber \\&\quad \le C\sum _{n=1}^\infty P\left( \max _{1\le j\le n}\left| \sum _{i=1}^n\left( Y_{ni}^{(1)}(j)-E Y_{ni}^{(1)}(j)\right) \right|>\frac{\varepsilon }{2}\right) \nonumber \\&\quad \le C\sum _{n=1}^\infty \sum _{j=1}^nP\left( \left| \sum _{i=1}^n\left( Y_{ni}^{(1)}(j)-E Y_{ni}^{(1)}(j)\right) \right| >\frac{\varepsilon }{2}\right) \nonumber \\&\quad \le C\sum _{n=1}^\infty \sum _{j=1}^n\left[ \sum _{i=1}^nE\left| Y_{ni}^{(1)}(j)\right| ^q+h(n)\sum _{i=1}^n\left( E\left| Y_{ni}^{(1)}(j)\right| ^2\right) ^{q/2}\right] \nonumber \\&\quad \doteq J_{11}+J_{12}. \end{aligned}$$
(3.30)

Note that

$$\begin{aligned} J_{11}\le & {} \sum _{n=1}^\infty \sum _{j=1}^n\sum _{i=1}^n\left[ n^{-\delta q}(h(n))^{-\frac{\beta q}{4}}P\left( |X_{ni}(j)|>n^{-\delta }(h(n))^{-\frac{\beta }{4}}\right) \right. \nonumber \\&+\,\left. E|X_{ni}(j)|^qI\left( |X_{ni}(j)|\le n^{-\delta }(h(n))^{-\frac{\beta }{4}}\right) \right] \nonumber \\\le & {} C\sum _{n=1}^\infty \sum _{j=1}^n\sum _{i=1}^n n^{-\delta (q-2)}(h(n))^{-\beta (q-2)/4}E|X_{ni}(j)|^2\nonumber \\\le & {} CEX^2\sum _{n=1}^\infty n^{1-\alpha -\delta (q-2)}(h(n))^{-\beta (q+2)/4}<\infty , \end{aligned}$$
(3.31)

and

$$\begin{aligned} J_{12}\le & {} C\sum _{n=1}^\infty \sum _{j=1}^nh(n)\left( \sum _{i=1}^n E|X_{ni}(j)|^2\right) ^{q/2}\nonumber \\\le & {} C(EX^2)^{q/2}\sum _{n=1}^\infty n^{1-\alpha q/2}(h(n))^{1-\beta q/2}<\infty . \end{aligned}$$
(3.32)

We can see that \(J_1\rightarrow 0~a.s.\), \(n\rightarrow \infty \) by (3.30)–(3.32) and the Borel–Cantelli Lemma.

Next we turn to estimate \(J_2\). It follows from (3.27) that

$$\begin{aligned} \max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(2)}(j)\right|\le & {} C\max _{1\le j\le n}\sum _{i=1}^n \left| X_{ni}(j)\right| I\left( \left| X_{ni}(j)\right| \ge \frac{\varepsilon }{N}(h(n))^{-\beta /4}\right) \nonumber \\\le & {} Cn^{-\alpha }(h(n))^{-\beta }\sum _{i=1}^n|X_i|I\left( \left| X_{i}\right| \ge Cn^{\alpha }(h(n))^{\beta }(h(n))^{-\beta /4}\right) \nonumber \\\le & {} Cn^{-\alpha }(h(n))^{-\beta }\sum _{i=1}^n|X_i|I(\left| X_{i}\right| \ge Ci^{\alpha }). \end{aligned}$$
(3.33)

Hence, to prove \(J_2\rightarrow 0~a.s.,~n\rightarrow \infty \), we only need to show

$$\begin{aligned} \sum _{i=1}^{\infty } i^{-\alpha }(h(i))^{-\beta }|X_i|I(\left| X_{i}\right| \ge Ci^{\alpha })<\infty ~a.s.. \end{aligned}$$
(3.34)

It can be checked by \(\sum \nolimits _{i=1}^{n} i^{-\alpha }(h(i))^{-\beta }=O(n^{\alpha })\) and \(EX^2<\infty \) that

$$\begin{aligned}&\sum _{i=1}^{\infty } i^{-\alpha }(h(i))^{-\beta }E|X_i|I(\left| X_{i}\right| \ge Ci^{\alpha })\nonumber \\&\quad \le C\sum _{i=1}^{\infty } i^{-\alpha }(h(i))^{-\beta }\sum _{n=i}^{\infty }E|X|I(Cn^{\alpha }\le |X|<C(n+1)^{\alpha })\nonumber \\&\quad \le C\sum _{n=1}^{\infty }n^{\alpha }E|X|I(Cn^{\alpha }\le |X|<C(n+1)^{\alpha }),\nonumber \\&\quad \le CEX^2<\infty , \end{aligned}$$
(3.35)

which implies that (3.34) holds. Consequently, according to (3.33), (3.34) and Kronecker’s lemma, \(J_2\rightarrow 0~a.s.\), \(n\rightarrow \infty \).
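Kronecker’s lemma — if \(\sum _i x_i/a_i\) converges and \(0<a_n\uparrow \infty \), then \(a_n^{-1}\sum _{i=1}^n x_i\rightarrow 0\) — is the step turning (3.34) into \(J_2\rightarrow 0\). A toy numerical illustration with assumed sequences \(a_i=i\) and \(x_i=(-1)^i i^{0.4}\):

```python
# Toy illustration of Kronecker's lemma with the assumed sequences
# a_i = i and x_i = (-1)^i * i**0.4: the series sum_i x_i / a_i =
# sum_i (-1)^i * i**(-0.6) converges (alternating terms decreasing to 0),
# so the averages (1/n) * sum_{i<=n} x_i must tend to 0.
avgs, s = [], 0.0
for i in range(1, 100_001):
    s += (-1) ** i * i ** 0.4
    if i in (100, 10_000, 100_000):
        avgs.append(abs(s) / i)

assert avgs[0] > avgs[1] > avgs[2]   # |average| shrinking toward 0
```

Note that the individual terms \(x_i\) are unbounded; it is the convergence of the weighted series that forces the averages to vanish.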

From the definition of \(Y_{ni}^{(3)}(j)\), we know that

$$\begin{aligned} 0\le Y_{ni}^{(3)}(j)<\frac{\varepsilon }{N}(h(n))^{-\beta /4}-n^{-\delta }(h(n))^{-\beta /4}<\frac{\varepsilon }{N}. \end{aligned}$$

Since each nonzero \(Y_{ni}^{(3)}(j)\) is smaller than \(\varepsilon /N\), the event \(\left| \sum \nolimits _{i=1}^n Y_{ni}^{(3)}(j)\right| >\varepsilon \) forces at least \(N\) of the \(Y_{ni}^{(3)}(j)\) to be nonzero. Therefore, by taking \(N>\max \left\{ \frac{2}{\alpha -2\delta },\frac{2}{\beta }\right\} \), we have

$$\begin{aligned}&\sum _{n=1}^{\infty }P\left( \max _{1\le j\le n}\left| \sum _{i=1}^n Y_{ni}^{(3)}(j)\right| >\varepsilon \right) \\&\quad \le \sum _{n=1}^{\infty }\sum _{j=1}^nP\left( \text {there are at least}~N~\text {nonzero}~Y_{ni}^{(3)}(j)\right) \\&\quad \le \sum _{n=1}^{\infty }\sum _{j=1}^n\sum _{1\le k_1<\cdots<k_N \le n}\\&\qquad P\left( X_{n,k_1}(j)\ge n^{-\delta }(h(n))^{-\beta /4},\ldots ,X_{n,k_N}(j)\ge n^{-\delta }(h(n))^{-\beta /4}\right) \\&\quad \le \sum _{n=1}^{\infty }\sum _{j=1}^n\sum _{1\le k_1<\cdots<k_N\le n}h(n)\prod _{i=1}^NP\left( X_{n,k_i}(j)\ge n^{-\delta }(h(n))^{-\beta /4}\right) \\&\quad \le \sum _{n=1}^{\infty }\sum _{j=1}^nh(n)\left( \sum _{i=1}^nP\left( |X_{ni}(j)|\ge n^{-\delta }(h(n))^{-\beta /4}\right) \right) ^N\\&\quad \le \sum _{n=1}^{\infty }\sum _{j=1}^nh(n)\left( \sum _{i=1}^nn^{2\delta }(h(n))^{\beta /2}E|X_{ni}(j)|^2\right) ^N\\&\quad \le C(EX^2)^N\sum _{n=1}^{\infty }n^{1-(\alpha -2\delta )N}(h(n))^{1-\beta N/2}<\infty . \end{aligned}$$

Hence, by the Borel–Cantelli lemma, \(J_3\rightarrow 0~a.s.\) as \(n\rightarrow \infty \). Note that

$$\begin{aligned} -\frac{\varepsilon }{N}<-\frac{\varepsilon }{N}(h(n))^{-\beta /4}+n^{-\delta }(h(n))^{-\beta /4}<Y_{ni}^{(4)}(j)\le 0 . \end{aligned}$$

Similar to the proof for \(J_3\), we have \(J_4\rightarrow 0~a.s.\) as \(n\rightarrow \infty \). This completes the proof of the lemma. \(\square \)


About this article


Cite this article

Wang, X., Deng, X. & Hu, S. On consistency of the weighted least squares estimators in a semiparametric regression model. Metrika 81, 797–820 (2018). https://doi.org/10.1007/s00184-018-0659-y

