Robust estimation for spatial semiparametric varying coefficient partially linear regression

Abstract

This paper considers a varying coefficient partially linear regression model with spatial data. A general formulation is used to treat mean regression, median regression, quantile regression and robust mean regression in a single setting. The parametric estimators of the model are obtained through piecewise local polynomial approximation of the nonparametric coefficient functions. The local estimators of the unknown coefficient functions are then obtained by replacing the parameters in the model with their estimators and using local linear approximations. The asymptotic distribution of the estimator of the unknown parameter vector is established. The asymptotic distributions of the estimators of the unknown coefficient functions at both interior and boundary points are also derived. Finite sample properties of our procedures are studied through Monte Carlo simulations. A real data example about spatial soil data is used to illustrate the proposed methodology.
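
The general formulation works with a convex loss function \(\rho \) and its (sub)derivative \(\psi \); each of the four regressions named above corresponds to one choice of \(\rho \). The sketch below is our own illustration of this unification, not code from the paper; the Huber cutoff \(c=1.345\) and quantile level \(\tau =0.75\) are arbitrary example values.

```python
import numpy as np

# Each convex loss rho below is one instance of the general formulation;
# psi denotes a (sub)derivative of rho, the "score" used in the proofs.

def rho_mean(r):                    # squared error -> mean regression
    return r ** 2

def rho_median(r):                  # absolute error -> median regression
    return np.abs(r)

def rho_quantile(r, tau=0.75):      # check loss -> tau-th quantile regression
    return r * (tau - (r < 0))

def rho_huber(r, c=1.345):          # Huber loss -> robust mean regression
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def psi_huber(r, c=1.345):          # psi = rho' for the Huber loss
    return np.clip(r, -c, c)
```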

Acknowledgments

The author thanks the anonymous referees for their valuable comments and suggestions, which improved an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (Grant no. 11071120).

Author information

Correspondence to Tang Qingguo.

Appendix: Proofs

Since the proof of Theorem 3.3 is similar to that of Theorem 3.2, we give only the proofs of Theorems 3.1 and 3.2. Let \(C\) denote a generic positive constant, not depending on \(m\) or \(n\), whose value may change from one appearance to the next. Let

$$\begin{aligned}\begin{array}{l} b_{0r}=(\alpha _{r}(t_{1}),\cdots ,h_{0}^{s}\alpha _{r}^{(s)}(t_{1})/s!, \cdots , \alpha _{r}(t_{M_{N}}),\cdots ,h_{0}^{s}\alpha _{r}^{(s)}(t_{M_{N}})/s!)^{T}, \\ \tilde{G}_{kr}=(g_{kr}^{*}(t_{1}),\cdots ,h_{0}^{s}{g_{kr}^{*}}^{(s)}(t_{1})/s!, \cdots , g_{kr}^{*}(t_{M_{N}}),\cdots ,h_{0}^{s}{g_{kr}^{*}}^{(s)}(t_{M_{N}})/s!)^{T} \end{array}\end{aligned}$$

and \(b_{0}=(b_{01}^{T},\ldots ,b_{0d_{1}}^{T})^{T}\), \(\tilde{G}_{k}=(\tilde{G}_{k1}^{T}, \ldots ,\tilde{G}_{kd_{1}}^{T})^{T}\), \(\tilde{G}=(\tilde{G}_{1},\ldots ,\tilde{G}_{d_{2}})\). Set \(\tilde{g}(X_{ij},U_{ij})^{T}=B_{ij}^{T}\tilde{G}\), \(\tilde{Z}_{ij}=N^{-1/2}[Z_{ij}-g^{*}(X_{ij},U_{ij})]\), \(\theta _{1}=N^{1/2}(\beta -\beta _{0})\), \(\tilde{B}_{ij}=(Nh_{0})^{-1/2}B_{ij}\), \(\theta _{2}=(Nh_{0})^{1/2}[(b-b_{0})+\tilde{G}(\beta -\beta _{0})]\), \(\theta =(\theta _{1}^{T},\theta _{2}^{T})^{T}\), \(V_{ij}=(\tilde{Z}_{ij}^{T},\tilde{B}_{ij}^{T})^{T}\) and \(e_{ij}=X_{ij}^{T}\alpha (U_{ij})-B_{ij}^{T}b_{0}+[g^{*}(X_{ij},U_{ij}) -\tilde{g}(X_{ij},U_{ij})]^{T}(\beta -\beta _{0})\). We consider the following new optimization problem

$$\begin{aligned} \min _{\theta }\sum _{i=1}^{m}\sum _{j=1}^{n} [\rho (\varepsilon _{ij}+e_{ij}-V_{ij}^{T}\theta ) -\rho (\varepsilon _{ij}+e_{ij})]. \end{aligned}$$

Clearly, \(\hat{\theta }_{1}=N^{1/2}(\hat{\beta }-\beta _{0})\). Let \(S_{N}(\theta )\) denote the objective function above and

$$\begin{aligned}\begin{array}{l} \Gamma _{N}(\theta )=\sum _{i=1}^{m}\sum _{j=1}^{n} E([\rho (\varepsilon _{ij}+e_{ij}-V_{ij}^{T}\theta ) -\rho (\varepsilon _{ij}+e_{ij})]|X_{ij},Z_{ij},U_{ij}), \\ R_{N}(\theta )=S_{N}(\theta )-\Gamma _{N}(\theta )+ \sum _{i=1}^{m}\sum _{j=1}^{n}V_{ij}^{T}\theta \psi (\varepsilon _{ij}). \end{array} \end{aligned}$$

Then

$$\begin{aligned} S_{N}(\theta )=\Gamma _{N}(\theta )-\sum _{i=1}^{m}\sum _{j=1}^{n}V_{ij}^{T}\theta \psi (\varepsilon _{ij}) +R_{N}(\theta ). \end{aligned}$$
(6.1)

The decomposition (6.1) separates \(S_{N}\) into a deterministic drift term \(\Gamma _{N}\), a term linear in the score \(\psi (\varepsilon _{ij})\), and a remainder \(R_{N}\); Lemma 6.3 below shows that the remainder is uniformly negligible. We first present several lemmas that are needed to prove the theorems.

Lemma 6.1

Let \(\xi _{1}, \ldots , \xi _{n}\) be arbitrary scalar random variables such that \(\max _{1\le i\le n} E(|\xi _{i}|^{r})<\infty \) for some \(r\ge 1\). Then, we have

$$\begin{aligned} E\left( \max _{1\le i\le n}|\xi _{i}|\right) \le C_{r}n^{1/r}, \end{aligned}$$

where \(C_{r}\) is a constant depending only on \(r\) and \(\max _{1\le i\le n} E(|\xi _{i}|^{r})\).

Proof

See van der Vaart and Wellner (1996, Lemma 2.2.2). \(\square \)
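
For completeness, one standard route to the stated bound chains Jensen's inequality with a union-type bound:

$$\begin{aligned} E\Big (\max _{1\le i\le n}|\xi _{i}|\Big ) \le \Big (E\max _{1\le i\le n}|\xi _{i}|^{r}\Big )^{1/r} \le \Big (\sum _{i=1}^{n}E|\xi _{i}|^{r}\Big )^{1/r} \le \Big (n\max _{1\le i\le n}E|\xi _{i}|^{r}\Big )^{1/r} \le C_{r}n^{1/r}. \end{aligned}$$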

Lemma 6.2

Let \((R_{ij}: (i,j)\in \mathbf {Z}^{2})\) be a zero-mean real-valued random field such that \(\sup _{1\le i\le m,1\le j\le n}|R_{ij}|\le V_{0}<\infty \) for some positive constant \(V_{0}\). Then for each \(\tilde{q}=(\tilde{q}_{1},\tilde{q}_{2})\) with integer-valued coordinates \(\tilde{q}_{1}\in [1,m/2]\), \(\tilde{q}_{2}\in [1,n/2]\) and for each \(\varepsilon >0\), we have

$$\begin{aligned} P\left( \left| \sum _{i=1}^{m}\sum _{j=1}^{n}R_{ij}\right| >N\varepsilon \right) \le 4\left\{ 2\exp \left( -\frac{\varepsilon ^{2}q^{*}}{8v^{2}(\tilde{q})}\right) +\frac{4V_{0}}{\varepsilon } \varphi (\min (\tilde{p}_{1},\tilde{p}_{2}))\right\} , \end{aligned}$$

where \(q^{*}=\tilde{q}_{1}\tilde{q}_{2}\), \(\tilde{p}=(\tilde{p}_{1},\tilde{p}_{2})\), \(\tilde{p}_{1}=m/(2\tilde{q}_{1})\), \(\tilde{p}_{2}=n/(2\tilde{q}_{2})\) and \(v^{2}(\tilde{q})=8\sigma ^{2}(\tilde{q})/{p^{*}}^{2}+V_{0}\varepsilon \) with \(p^{*}=\tilde{p}_{1}\tilde{p}_{2}\), \(\sigma ^{2}(\tilde{q})=\min _{\mathbf {i},\mathbf {j}}E(\sum _{(i,j)\in A_{\mathbf {i}\mathbf {j}}}R_{ij})^{2}\) and \(A_{\mathbf {i}\mathbf {j}}= \prod _{k=1}^{2}((i_{k}+2j_{k})\tilde{p}_{k}, (i_{k}+2j_{k}+1)\tilde{p}_{k}]\). The minimisation in the defining equation for \(\sigma ^{2}(\tilde{q})\) is taken over all pairs of \(2\)-tuple indices \(\mathbf {i}=(i_{1},i_{2})\) and \(\mathbf {j}=(j_{1},j_{2})\) with \(i_{k}=0,1\) and \(j_{k}=0, 1, \ldots , \tilde{q}_{k}- 1\).

Proof

The proof can be found in Lee et al. (2004). \(\square \)

Lemma 6.3

Under the assumptions of Theorem 3.1, for any sufficiently large \(L\), it holds that

$$\begin{aligned} \sup _{\Vert \theta \Vert \le L}|M_{N}^{-1}R_{N}(M_{N}^{1/2}\theta )|=o_{p}(1). \end{aligned}$$

Proof

Let \(T_{ij}(\theta )=\rho (\varepsilon _{ij}+e_{ij}-V_{ij}^{T}\theta )- \rho (\varepsilon _{ij}+e_{ij})+V_{ij}^{T}\theta \psi (\varepsilon _{ij})\) and \(R_{ij}(\theta )=T_{ij}(\theta )-E(T_{ij}(\theta )|X_{ij},Z_{ij},U_{ij})\). Then \(R_{N}(\theta )=\sum _{i=1}^{m}\sum _{j=1}^{n}R_{ij}(\theta )\). Observe that

$$\begin{aligned} T_{ij}(M_{N}^{1/2}\theta )=\int _{e_{ij}}^{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\theta }[\psi (\varepsilon _{ij}+t)-\psi (\varepsilon _{ij})]dt. \end{aligned}$$
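
This identity is a direct consequence of the fundamental theorem of calculus for the convex loss: since \(\psi \) is a (sub)derivative of \(\rho \), writing \(v=M_{N}^{1/2}V_{ij}^{T}\theta \) gives

$$\begin{aligned} \rho (\varepsilon _{ij}+e_{ij}-v)-\rho (\varepsilon _{ij}+e_{ij}) =\int _{e_{ij}}^{e_{ij}-v}\psi (\varepsilon _{ij}+t)dt, \qquad v\psi (\varepsilon _{ij})=-\int _{e_{ij}}^{e_{ij}-v}\psi (\varepsilon _{ij})dt, \end{aligned}$$

and adding the two displays yields \(T_{ij}(M_{N}^{1/2}\theta )\).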

By Assumption 4 and Lemma 6.1, we get

$$\begin{aligned} \max _{ij}\Vert X_{ij}\Vert =O_{p}(N^{1/(2\kappa )}), \ \ \ \ \ \max _{ij}\Vert Z_{ij}\Vert =O_{p}(N^{1/(2\kappa )}). \end{aligned}$$
(6.2)

Hence, for any \(\epsilon >0\), there exists a sufficiently large constant \(L^{*}\) such that

$$\begin{aligned} P\left( \max _{ij}\Vert X_{ij}\Vert >L^{*}N^{1/(2\kappa )}\right) <\epsilon /4, \ \ \ \ P\left( \max _{ij}\Vert Z_{ij}\Vert >L^{*}N^{1/(2\kappa )}\right) <\epsilon /4. \end{aligned}$$
(6.3)

When \(\max _{ij}\Vert X_{ij}\Vert \le L^{*}N^{1/(2\kappa )}\) and \(\max _{ij}\Vert Z_{ij}\Vert \le L^{*}N^{1/(2\kappa )}\), by Assumption 5, for \(\theta \) such that \(\Vert \theta \Vert \le L\), it holds that

$$\begin{aligned} \max _{1\le i\le m, 1\le j\le n}|R_{ij}(M_{N}^{1/2}\theta )| \le CM_{N}^{1/2}\max _{i,j}|V_{ij}^{T}\theta |\le CLL^{*}M_{N}N^{-1/2+1/(2\kappa )}. \end{aligned}$$
(6.4)

By Assumption 1, we have

$$\begin{aligned} \sigma ^{2}(\tilde{q}) &=\sum _{(i,j)\in A_{\mathbf {0}\mathbf {0}}}ER_{ij}^{2}(M_{N}^{1/2}\theta ) \\ &\quad +\sum _{(i,j)\in A_{\mathbf {0}\mathbf {0}}}\sum _{(i',j')\ne (i,j)\in A_{\mathbf {0}\mathbf {0}}}ER_{ij}(M_{N}^{1/2}\theta )R_{i'j'}(M_{N}^{1/2}\theta ), \end{aligned}$$
(6.5)

where \(A_{\mathbf {0}\mathbf {0}}=(0, \tilde{p}_{1}]\times (0, \tilde{p}_{2}]\). Using Assumptions 1, 5 and 6 and Taylor expansion, we obtain

$$\begin{aligned} \sum _{(i,j)\in A_{\mathbf {0}\mathbf {0}}}ER_{ij}^{2}(M_{N}^{1/2}\theta ) &\le C\sum _{(i,j)\in A_{\mathbf {0}\mathbf {0}}}E\left( M_{N}^{1/2}|V_{ij}^{T}\theta |(M_{N}(V_{ij}^{T}\theta )^{2} +e_{ij}^{2})\right) \\ &\le CL^{3}p^{*}M_{N}^{3}N^{-3/2}. \end{aligned}$$
(6.6)

Let \(c_{Nk}=[M_{N}^{\delta /(2+\delta )a}]\) for \(k=1,2\), where \(a>2(4+\delta )/(2+\delta )\) is a constant. Let the set \(\{(i,j)\ne (i',j')\in A_{\mathbf {0}\mathbf {0}}\}\) be split into the following two parts

$$\begin{aligned}\begin{array}{l} \mathbf {S}_{1}=\{(i,j)\ne (i',j')\in A_{\mathbf {0}\mathbf {0}}: |i-i'|\le c_{N1}, |j-j'|\le c_{N2}\},\\ \mathbf {S}_{2}=\{(i,j)\ne (i',j')\in A_{\mathbf {0}\mathbf {0}}: |i-i'|> c_{N1} \; \text{ or } \; |j-j'|> c_{N2}\}.\\ \end{array} \end{aligned}$$

By (6.6), we get

$$\begin{aligned} \sum _{(i,j),(i',j')\in \mathbf {S}_{1}}|ER_{ij}(M_{N}^{1/2}\theta )R_{i'j'}(M_{N}^{1/2}\theta )| =O(p^{*}M_{N}^{3+2\delta /(2+\delta ) a}N^{-3/2}). \end{aligned}$$
(6.7)

Using Lemma 5.1 of Hallin et al. (2004b) and Assumptions 1, 3 and 5 and the fact that \(\chi _{k}(u)\chi _{l}(u)=0\) for \(k, l=1,\ldots ,M_{N},k\ne l\), we deduce that

$$\begin{aligned} &\sum _{(i,j),(i',j')\in \mathbf {S}_{2}}|ER_{ij}(M_{N}^{1/2}\theta )R_{i'j'}(M_{N}^{1/2}\theta )|\\ &\quad \le \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}C (E(|R_{ij}(M_{N}^{1/2}\theta )|^{2+\delta }))^{2/(2+\delta )} (\varphi (\Vert (i',j')-(i,j)\Vert ))^{\delta /(2+\delta )}\\ &\quad \le \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}C (E(|T_{ij}(M_{N}^{1/2}\theta )|^{2+\delta }))^{2/(2+\delta )} (\varphi (\Vert (i',j')-(i,j)\Vert ))^{\delta /(2+\delta )}\\ &\quad \le \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}C\left[ E\left( |M_{N}^{1/2}V_{ij}^{T}\theta |^{\delta } |M_{N}^{1/2}V_{ij}^{T}\theta |(M_{N}(V_{ij}^{T}\theta )^{2} +e_{ij}^{2})\right) \right] ^{2/(2+\delta )} (\varphi (\Vert (i',j')-(i,j)\Vert ))^{\delta /(2+\delta )}\\ &\quad \le CM_{N}^{2(3+\delta )/(2+\delta )}N^{-(3+\delta )/(2+\delta )} \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}(\varphi (\Vert (i',j')-(i,j)\Vert ))^{\delta /(2+\delta )}. \end{aligned}$$

Using arguments similar to those used in the proof of Lemma 6.2 of Hallin et al. (2004b) and noting that \(\varphi (t)=O(e^{-\varsigma t})\) and \(c_{Nk}=[M_{N}^{\delta /(2+\delta )a}]\), we have

$$\begin{aligned} \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}(\varphi (\Vert (i',j')-(i,j)\Vert ))^{\delta /(2+\delta )} &\le C p^{*}\sum _{k=1}^{2}\sum _{t=c_{Nk}}^{\Vert \tilde{p}\Vert } t(\varphi (t))^{\delta /(2+\delta )} \\ &\le Cp^{*}\exp (-\varsigma _{1}[M_{N}^{\delta /(2+\delta )a}]\delta /(2+\delta )), \end{aligned}$$

where \(0<\varsigma _{1}<\varsigma \) is some constant. Therefore

$$\begin{aligned} \sum _{(i,j),(i',j')\in \mathbf {S}_{2}}|ER_{ij}(M_{N}^{1/2}\theta )R_{i'j'}(M_{N}^{1/2}\theta )| =o(p^{*}M_{N}^{3+2\delta /(2+\delta ) a}N^{-3/2}). \end{aligned}$$
(6.8)

Combining (6.5)–(6.8), we conclude that

$$\begin{aligned} \sigma ^{2}(\tilde{q})=O(p^{*}M_{N}^{3+2\delta /(2+\delta ) a}N^{-3/2}). \end{aligned}$$
(6.9)

Set \(\Theta =\{\theta =(\theta _{1},\ldots ,\theta _{K_{N}}): \Vert \theta \Vert \le L\}\) and \(|\theta |=\max _{1\le i\le K_{N}}|\theta _{i}|\), where \(K_{N}=d_{2}+d_{1}M_{N}(s+1)\). Let \(\Theta \) be divided into \(J_{N}\) disjoint parts \(\Theta _{1},\ldots ,\Theta _{J_{N}}\) such that for any \(\pi _{k}\in \Theta _{k}\), \(1\le k\le J_{N}\), and any sufficiently small \(\varepsilon >0\),

$$\begin{aligned} &\sup _{\theta \in \Theta _{k}}|R_{N}(M_{N}^{1/2}\theta ) -R_{N}(M_{N}^{1/2}\pi _{k})| \\ &\quad \le \sum _{i=1}^{m}\sum _{j=1}^{n}\sup _{\theta \in \Theta _{k}}\left| \int _{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\pi _{k}}^{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\theta }[\psi (\varepsilon _{ij}+t)-\psi (\varepsilon _{ij})]dt\right. \\ &\qquad \left. -E\left( \int _{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\pi _{k}}^{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\theta }[\psi (\varepsilon _{ij}+t)-\psi (\varepsilon _{ij})]dt\Big |X_{ij},Z_{ij},U_{ij}\right) \right| \\ &\quad \le CM_{N}^{1/2} \sum _{i=1}^{m}\sum _{j=1}^{n}\sup _{\theta \in \Theta _{k}}|V_{ij}^{T}(\theta -\pi _{k})| \\ &\quad \le CL^{*}M_{N}^{3/2}N^{1/2+1/(2\kappa )}\sup _{\theta \in \Theta _{k}}|\theta -\pi _{k}| <\varepsilon /2. \end{aligned}$$

This can be done with \(J_{N}\le ((4CL^{*}M_{N}^{3/2}N^{1/2+1/(2\kappa )}L)/\varepsilon )^{K_{N}}\). Let \(\varepsilon _{N1}=(M_{N}\log N)^{3}/m^{1-1/\kappa }\) and \(\varepsilon _{N2}=(M_{N}\log N)^{3}/n^{1-1/\kappa }\). By Assumption 6, we have \(\varepsilon _{N1}\rightarrow 0\) and \(\varepsilon _{N2}\rightarrow 0\). Take \(\tilde{q}_{1}=[(M_{N}\log N)^{1/2}m^{1/2+1/(2\kappa )}/\varepsilon _{N1}^{1/4}]\) and \(\tilde{q}_{2}=[(M_{N}\log N)^{1/2}n^{1/2+1/(2\kappa )}/\varepsilon _{N2}^{1/4}]\); then \(\tilde{p}_{1}=O(M_{N}\log N/\varepsilon _{N1}^{1/4})\) and \(\tilde{p}_{2}=O(M_{N}\log N/\varepsilon _{N2}^{1/4})\). Therefore, applying Lemma 6.2 together with (6.4) and (6.9), we deduce that

$$\begin{aligned} &P\left\{ \sup _{\Vert \theta \Vert \le L}|M_{N}^{-1}R_{N}(M_{N}^{1/2}\theta )I_{\{\max _{i,j}\Vert X_{ij}\Vert \le L^{*},\max _{i,j}\Vert Z_{ij}\Vert \le L^{*}\}}|>\varepsilon \right\} \\ &\quad \le \sum _{k=1}^{J_{N}}P\left\{ \left| \sum _{i=1}^{m}\sum _{j=1}^{n}R_{ij}(M_{N}^{1/2}\pi _{k})I_{\{\max _{i,j}\Vert X_{ij}\Vert \le L^{*},\max _{i,j}\Vert Z_{ij}\Vert \le L^{*}\}}\right| >\varepsilon M_{N}/2\right\} \\ &\quad \le \left( \frac{4CL^{*}M_{N}^{3/2}N^{1/2+1/(2\kappa )}L}{\varepsilon }\right) ^{K_{N}}\cdot 4\left\{ 2\exp \left( -\frac{M_{N}^{2}\varepsilon ^{2}q^{*}}{16(16N^{2} \sigma ^{2}(\tilde{q})/{p^{*}}^{2}+V_{0}NM_{N}\varepsilon )}\right) +\frac{8N V_{0}}{M_{N}\varepsilon } \varphi (\min (\tilde{p}_{1},\tilde{p}_{2}))\right\} \\ &\quad \le 8\left\{ \exp \left( 2sd_{1}M_{N}\log N \left[ 1-\frac{M_{N}\varepsilon ^{2}q^{*}}{32sd_{1}\log N (16N^{2} \sigma ^{2}(\tilde{q})/{p^{*}}^{2}+V_{0}NM_{N}\varepsilon )}\right] \right) \right. \\ &\qquad \left. +\exp (2sd_{1}M_{N}\log N[1-\varpi \min (\tilde{p}_{1},\tilde{p}_{2})/(2sd_{1}M_{N}\log N)])\right\} =o(1), \end{aligned}$$

where \(V_{0}=CLL^{*}M_{N}N^{-1/2+1/(2\kappa )}\). Lemma 6.3 now follows from (6.3) and the above expression. \(\square \)

Lemma 6.4

Suppose that Assumptions 1–6 hold. Then \(\Vert \hat{\theta }\Vert =O_{p}(M_{N}^{1/2})\).

Proof

By Assumptions 2 and 3, \(\max _{1\le i\le m,1\le j\le n}(|e_{ij}|+|M_{N}^{1/2}V_{ij}^{T}\theta |) =o(1)\). Then, by Assumption 5, we obtain that

$$\begin{aligned} M_{N}^{-1}\Gamma _{N}(M_{N}^{1/2}\theta ) &=M_{N}^{-1}\sum _{i=1}^{m}\sum _{j=1}^{n} \int _{e_{ij}}^{e_{ij}-M_{N}^{1/2}V_{ij}^{T}\theta }E(\psi (\varepsilon _{ij}+t)|X_{ij},Z_{ij},U_{ij})dt \\ &\ge \frac{1}{2}\sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})[(\tilde{Z}_{ij}^{T}\theta _{1})^{2} +(\tilde{B}_{ij}^{T}\theta _{2})^{2} +2\tilde{Z}_{ij}^{T}\theta _{1}\tilde{B}_{ij}^{T}\theta _{2}] \\ &\quad -\tilde{c}_{1}NM_{N}^{-[2(s+1)+1]}\max (\max _{i,j}\Vert X_{ij}\Vert ^{2},\max _{i,j}\Vert Z_{ij}\Vert ^{2})+o_{p}(1), \end{aligned}$$
(6.10)

where \(\tilde{c}_{1}\) is a positive constant. Note that \(\tilde{Z}_{ij}^{T}\theta _{1}\tilde{B}_{ij}^{T}\theta _{2}= \frac{1}{Nh_{0}^{1/2}}\theta _{1}^{T}[Z_{ij}-g^{*}(X_{ij},U_{ij})] B_{ij}^{T}\theta _{2}\) and \(Z_{ij}-g^{*}(X_{ij},U_{ij})=Z_{ij}-E(Z_{ij}|X_{ij},U_{ij})+E(Z_{ij}|X_{ij},U_{ij})-g^{*}(X_{ij},U_{ij})\). Using the fact that \(g^{*}(x,u)\) is the projection of \(E(Z_{ij}|X_{ij}=x,U_{ij}=u)\) onto the varying coefficient functional space \(\mathcal {Y}\) and that \(B_{ij}^{T}\theta _{2}\in \mathcal {Y}\), we have \(E\phi (X_{ij},U_{ij})[Z_{ij}-g^{*}(X_{ij},U_{ij})] B_{ij}^{T}\theta _{2}=0\). Let \(Q_{N}=\{(i,j): 1\le i\le m, 1\le j\le n\}\) and

$$\begin{aligned}\begin{array}{l} \mathbf {S}_{1}^{*}=\{(i,j)\ne (i',j')\in Q_{N}: |i-i'|\le c_{N1}, |j-j'|\le c_{N2}\},\\ \mathbf {S}_{2}^{*}=\{(i,j)\ne (i',j')\in Q_{N}: |i-i'|> c_{N1} \; \text{ or } \; |j-j'|> c_{N2}\}.\\ \end{array} \end{aligned}$$

Similar to the proof of (6.9), we deduce that

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})\tilde{Z}_{ij}^{T}\theta _{1}\tilde{B}_{ij}^{T}\theta _{2}=o_{p}(1). \end{aligned}$$
(6.11)

Using arguments similar to those used in the proof of Lemma 3 of Tang and Cheng (2009), we can prove that there are positive constants \(\tilde{C}_{1}\) and \(\tilde{C}_{2}\) such that, except on an event whose probability tends to zero, all the eigenvalues of \(\sum _{i=1}^{m}\sum _{j=1}^{n}\tilde{B}_{ij}\tilde{B}_{ij}^{T}\) fall between \(\tilde{C}_{1}\) and \(\tilde{C}_{2}\). Hence by Assumption 6 and (6.2), we conclude that

$$\begin{aligned} M_{N}^{-1}\Gamma _{N}(M_{N}^{1/2}\theta )\ge \tilde{c}_{2}\Vert \theta \Vert ^{2}+o_{p}(1), \end{aligned}$$
(6.12)

where \(\tilde{c}_{2}\) is a positive constant. It is easy to prove that

$$\begin{aligned} E\left\| \sum _{i=1}^{m}\sum _{j=1}^{n}\psi (\varepsilon _{ij})V_{ij}\right\| =o(M_{N}^{1/2}). \end{aligned}$$

Hence

$$\begin{aligned} M_{N}^{-1/2}E\left| \sum _{i=1}^{m}\sum _{j=1}^{n}V_{ij}^{T}\theta \psi (\varepsilon _{ij})\right| \le CM_{N}^{-1/2}\Vert \theta \Vert E\left\| \sum _{i=1}^{m}\sum _{j=1}^{n}\psi (\varepsilon _{ij})V_{ij}\right\| =o(1). \end{aligned}$$
(6.13)

Combining (6.1), (6.12), (6.13) and Lemma 6.3, for sufficiently large \(L\), we deduce that

$$\begin{aligned} P\left( \inf _{\Vert \theta \Vert =L}M_{N}^{-1}S_{N}(M_{N}^{1/2}\theta )>0\right) \rightarrow 1, \end{aligned}$$

which implies, by the convexity of \(\rho \), that

$$\begin{aligned} P\left( \inf _{\Vert \theta \Vert \ge LM_{N}^{1/2}}\sum _{i=1}^{m}\sum _{j=1}^{n}\rho (\varepsilon _{ij}+e_{ij}-V_{ij}^{T}\theta ) >\sum _{i=1}^{m}\sum _{j=1}^{n}\rho (\varepsilon _{ij}+e_{ij})\right) \rightarrow 1. \end{aligned}$$
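
The implication uses convexity in the standard way: since \(S_{N}(0)=0\) and \(S_{N}\) is convex,

$$\begin{aligned} S_{N}(\lambda \theta )\le \lambda S_{N}(\theta )+(1-\lambda )S_{N}(0)=\lambda S_{N}(\theta ), \qquad \lambda \in [0,1], \end{aligned}$$

so if \(S_{N}(\theta )\le 0\) for some \(\Vert \theta \Vert \ge LM_{N}^{1/2}\), then \(S_{N}\) would also be non-positive where the segment from \(0\) to \(\theta \) crosses the sphere \(\Vert \theta \Vert =LM_{N}^{1/2}\), contradicting the preceding display.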

Therefore \(\Vert \hat{\theta }\Vert =O_{p}(M_{N}^{1/2})\). The proof of Lemma 6.4 is finished. \(\square \)

Proof of Theorem 3.1

Let \(\eta =\Lambda ^{-1}\sum _{i=1}^{m}\sum _{j=1}^{n} \psi (\varepsilon _{ij})\tilde{Z}_{ij}\). Using arguments similar to those used in the proof of Lemma 6 of Tang and Cheng (2009), we have \(\sum _{i=1}^{m}\sum _{j=1}^{n} \psi (\varepsilon _{ij})\tilde{Z}_{ij} \longrightarrow _{d} N(0,\Delta )\). Hence, to prove Theorem 3.1, it suffices to show that for any \(\epsilon >0\), \(P\{\Vert \hat{\theta }_{1}-\eta \Vert <\epsilon \}\rightarrow 1\). Set \(\tilde{S}_{ij}(\theta _{1},\theta _{2})= \rho (\varepsilon _{ij}+e_{ij}-\tilde{Z}_{ij}^{T}\theta _{1}-\tilde{B}_{ij}^{T}\theta _{2}) -\rho (\varepsilon _{ij}+e_{ij}-\tilde{B}_{ij}^{T}\theta _{2})\) and \(\tilde{S}_{N}(\theta _{1},\theta _{2})=\sum _{i=1}^{m}\sum _{j=1}^{n}\tilde{S}_{ij}(\theta _{1},\theta _{2})\). By the convexity of \(\rho \), we need only show that

$$\begin{aligned} P\left\{ \inf _{\Vert \theta _{1}-\eta \Vert =\epsilon }(\tilde{S}_{N}(\theta _{1},\hat{\theta }_{2})-\tilde{S}_{N}(\eta ,\hat{\theta }_{2}))>0\right\} \rightarrow 1. \end{aligned}$$
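
For orientation, \(\eta \) is precisely the minimizer of the quadratic approximation appearing in (6.16) below: assuming \(\Lambda \) is positive definite,

$$\begin{aligned} \mathop {\arg \min }_{\theta _{1}}\Big \{\frac{1}{2}\theta _{1}^{T}\Lambda \theta _{1}-\theta _{1}^{T}\Lambda \eta \Big \} =\Lambda ^{-1}\Lambda \eta =\eta , \end{aligned}$$

so once the remainder \(\tilde{R}_{N}\) is shown to be uniformly negligible, the minimizer \(\hat{\theta }_{1}\) must lie in any fixed \(\epsilon \)-ball around \(\eta \) with probability tending to one.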

By Lemma 6.4 and the definition of \(\eta \), we have \(\Vert \hat{\theta }_{2}\Vert =O_{p}(M_{N}^{1/2})\) and \(\Vert \eta \Vert =O_{p}(1)\). So it suffices to show that, for any sufficiently large \(L>0\) and \(L'>0\),

$$\begin{aligned} P\left( \left\{ \inf _{\Vert \theta _{1}-\eta \Vert =\epsilon ,\Vert \theta _{2}\Vert \le LM_{N}^{1/2}}(\tilde{S}_{N}(\theta _{1},\theta _{2})-\tilde{S}_{N}(\eta ,\theta _{2}))>0\right\} \cap \{\Vert \eta \Vert \le L^{'}\}\right) \rightarrow 1.\nonumber \\ \end{aligned}$$
(6.14)

Set \(\tilde{\Gamma }_{N}(\theta _{1},\theta _{2}) =\sum _{i=1}^{m}\sum _{j=1}^{n}E(\tilde{S}_{ij}(\theta _{1},\theta _{2})|X_{ij},Z_{ij},U_{ij})\) and

$$\begin{aligned} \tilde{R}_{N}(\theta _{1},\theta _{2})=\tilde{S}_{N}(\theta _{1},\theta _{2})-\tilde{\Gamma }_{N}(\theta _{1},\theta _{2})+ \sum _{i=1}^{m}\sum _{j=1}^{n}\tilde{Z}_{ij}^{T}\theta _{1}\psi (\varepsilon _{ij}). \end{aligned}$$

Then

$$\begin{aligned} \tilde{S}_{N}(\theta _{1},\theta _{2})=\tilde{\Gamma }_{N}(\theta _{1},\theta _{2})-\sum _{i=1}^{m}\sum _{j=1}^{n}\tilde{Z}_{ij}^{T}\theta _{1}\psi (\varepsilon _{ij}) +\tilde{R}_{N}(\theta _{1},\theta _{2}). \end{aligned}$$
(6.15)

By Assumptions 1 and 5, we have

$$\begin{aligned} \tilde{\Gamma }_{N}(\theta _{1},\theta _{2}) =\sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij}) \Big [\frac{1}{2}(\tilde{Z}_{ij}^{T}\theta _{1})^{2}-\tilde{Z}_{ij}^{T}\theta _{1}e_{ij} +\tilde{Z}_{ij}^{T}\theta _{1}\tilde{B}_{ij}^{T}\theta _{2}\Big ][1+o_{p}(1)]. \end{aligned}$$

By arguments similar to those used in the proof of Lemma 3 of Tang and Cheng (2009), we deduce that

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij}) (\tilde{Z}_{ij}^{T}\theta _{1})^{2}=\theta _{1}^{T}\Lambda \theta _{1}+o_{p}(1). \end{aligned}$$

Similar to the proof of (6.9), we get \(\sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})\tilde{Z}_{ij}^{T}\theta _{1}e_{ij}=o_{p}(1)\). Hence, by (6.11), we have

$$\begin{aligned} \tilde{S}_{N}(\theta _{1},\theta _{2})=\frac{1}{2}\theta _{1}^{T}\Lambda \theta _{1}-\theta _{1}^{T}\Lambda \eta +\tilde{R}_{N}(\theta _{1},\theta _{2})+o_{p}(1). \end{aligned}$$
(6.16)

Using the fact that \(2\theta _{1}^{T}\Lambda \eta =\theta _{1}^{T}\Lambda \theta _{1}-(\theta _{1}-\eta )^{T}\Lambda (\theta _{1}-\eta )+\eta ^{T}\Lambda \eta \) and that \(\Vert \theta _{1}-\eta \Vert =\epsilon \), we have

$$\begin{aligned} \tilde{S}_{N}(\theta _{1},\theta _{2})\ge \frac{1}{2}\lambda _{\min }\epsilon ^{2}-\frac{1}{2}\eta ^{T}\Lambda \eta +\tilde{R}_{N}(\theta _{1},\theta _{2})+o_{p}(1), \end{aligned}$$
(6.17)

where \(\lambda _{\min }\) is the minimum eigenvalue of \(\Lambda \). By (6.16), we get

$$\begin{aligned} \tilde{S}_{N}(\eta ,\theta _{2})=-\frac{1}{2}\eta ^{T}\Lambda \eta +\tilde{R}_{N}(\eta ,\theta _{2})+o_{p}(1). \end{aligned}$$
(6.18)

It follows from (6.17) and (6.18) that

$$\begin{aligned} \tilde{S}_{N}(\theta _{1},\theta _{2})-\tilde{S}_{N}(\eta ,\theta _{2})\ge \frac{1}{2}\lambda _{\min }\epsilon ^{2} -2\sup _{\Vert \theta _{1}\Vert \le \tilde{L},\Vert \theta _{2}\Vert \le LM_{N}^{1/2}} |\tilde{R}_{N}(\theta _{1},\theta _{2})|+o_{p}(1). \end{aligned}$$

Using arguments similar to those in the proof of Lemma 6.3, it can be shown that \(\sup _{\Vert \theta _{1}\Vert \le \tilde{L},\Vert \theta _{2}\Vert \le LM_{N}^{1/2}}|\tilde{R}_{N}(\theta _{1},\theta _{2})|=o_{p}(1)\). Hence (6.14) holds, and Theorem 3.1 follows. \(\square \)

Proof of Theorem 3.2

Under Assumption 2, by Taylor expansion we have

$$\begin{aligned} \alpha _{r}(U_{ij})=\alpha _{r}(u_{0})+\alpha _{r}^{'}(u_{0})(U_{ij} -u_{0})+\frac{1}{2}\alpha _{r}^{''}(\xi _{ijr})(U_{ij}-u_{0})^{2} \end{aligned}$$

for \(|U_{ij}-u_{0}|\le Mh\), where \(|\xi _{ijr}-u_{0}|<|U_{ij}-u_{0}|\). Let \(a_{0}=(a_{10},\ldots ,a_{d_{1}0})^{T}\), \(a_{1}=(a_{11},\ldots ,a_{d_{1}1})^{T}\), \(\alpha ''(\xi _{ij})=(\alpha _{1}^{''}(\xi _{ij1}),\ldots ,\alpha _{d_{1}}^{''}(\xi _{ijd_{1}}))^{T}\), and \(e_{ij}^{*}=\frac{1}{2}(U_{ij}-u_{0})^{2}\alpha ''(\xi _{ij})^{T}X_{ij}\), \(D_{ij}=(Nh)^{-1/2}(1,h^{-1}(U_{ij} -u_{0}))^{T}\otimes X_{ij}\), \(\vartheta =(Nh)^{1/2}((a_{0}-\alpha (u_{0}))^{T},h(a_{1}-\alpha '(u_{0}))^{T})^{T}\), \(\bar{Z}_{ij}=N^{-1/2}Z_{ij}\), where \(\otimes \) is the Kronecker product. We consider the following new optimization problem

$$\begin{aligned} \hat{\vartheta }&= \mathop {\arg \min }_{\vartheta }\sum _{i=1}^{m}\sum _{j=1}^{n} [\rho (\varepsilon _{ij}+e_{ij}^{*}-D_{ij}^{T}\vartheta -\bar{Z}_{ij}^{T}\hat{\theta }_{1})\\&\quad -\rho (\varepsilon _{ij}+e_{ij}^{*}-\bar{Z}_{ij}^{T}\hat{\theta }_{1})]K\left( \frac{U_{ij}-u_{0}}{h}\right) . \end{aligned}$$
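
Numerically, the display above is a kernel-weighted M-estimation problem that can be handed to a generic optimizer. The following sketch is only our schematic illustration of this step, not the paper's implementation; the Epanechnikov kernel, the Huber loss, and all function and variable names are example choices.

```python
import numpy as np
from scipy.optimize import minimize

def epanechnikov(t):
    """Kernel K(t) = 0.75 (1 - t^2) on |t| <= 1, zero elsewhere."""
    return 0.75 * np.maximum(1.0 - t ** 2, 0.0)

def huber(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def local_linear_fit(Y, X, Z, U, beta_hat, u0, h):
    """Estimate (a0, a1) ~ (alpha(u0), alpha'(u0)) by minimizing the
    kernel-weighted sum of losses at the plugged-in estimate beta_hat."""
    w = epanechnikov((U - u0) / h)          # local weights K((U - u0)/h)
    d1 = X.shape[1]

    def objective(par):
        a0, a1 = par[:d1], par[d1:]
        fitted = X @ a0 + (X * (U - u0)[:, None]) @ a1 + Z @ beta_hat
        return np.sum(w * huber(Y - fitted))

    res = minimize(objective, x0=np.zeros(2 * d1), method="BFGS")
    return res.x[:d1], res.x[d1:]           # alpha_hat(u0), alpha_hat'(u0)
```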

Clearly, \(\hat{\vartheta }=(Nh)^{1/2}((\hat{a}_{0}-\alpha (u_{0}))^{T},h(\hat{a}_{1}-\alpha '(u_{0}))^{T})^{T}\). Let \(\vartheta ^{*}= \frac{1}{2}N^{1/2}h^{5/2}P^{*}\otimes \alpha ^{''}(u_{0})\) and \(\tilde{\vartheta }=\vartheta ^{*}+\frac{1}{f(u_{0})}(P^{-1}\otimes \Omega ^{-1}(u_{0})) \sum _{i=1}^{m}\sum _{j=1}^{n}D_{ij}\psi (\varepsilon _{ij})K(\frac{U_{ij}-u_{0}}{h})\), where \(P^{*}=(\mu _{0}^{-1}\mu _{2},0)^{T}\) and \(P=\mathrm {diag}(\mu _{0},\mu _{2})\). Under the assumptions of Theorem 3.2, using arguments similar to those used in the proof of Lemma 3.1 of Hallin et al. (2004b), we can show that

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{n}D_{ij}\psi (\varepsilon _{ij})K\left( \frac{U_{ij} -u_{0}}{h}\right) \rightarrow _{d} N(0,f(u_{0})\tilde{P}\otimes \Sigma (u_{0})), \end{aligned}$$

where \(\tilde{P}=\mathrm {diag}(\nu _{0},\nu _{2})\). Therefore, to complete the proof of Theorem 3.2, it suffices to prove that, for any \(\epsilon >0\),

$$\begin{aligned} P\{\Vert \hat{\vartheta }-\tilde{\vartheta }\Vert <\epsilon \}\rightarrow 1. \end{aligned}$$
(6.19)

Set \(S_{ij}^{*}(\vartheta ,\theta _{1})= [\rho (\varepsilon _{ij}+e_{ij}^{*}-D_{ij}^{T}\vartheta -\bar{Z}_{ij}^{T}\theta _{1}) -\rho (\varepsilon _{ij}+e_{ij}^{*}-\bar{Z}_{ij}^{T}\theta _{1})]K(\frac{U_{ij}-u_{0}}{h})\), \(S_{N}^{*}(\vartheta ,\theta _{1})=\sum _{i=1}^{m}\sum _{j=1}^{n}S_{ij}^{*}(\vartheta ,\theta _{1})\) and \(\Gamma _{N}^{*}(\vartheta ,\theta _{1}) =\sum _{i=1}^{m}\sum _{j=1}^{n}E(S_{ij}^{*}(\vartheta ,\theta _{1})|X_{ij},Z_{ij},U_{ij})\). By Assumptions 3, 5 and 7, we deduce that

$$\begin{aligned} \Gamma _{N}^{*}(\vartheta ,\theta _{1}) &=\sum _{i=1}^{m}\sum _{j=1}^{n} \int _{e_{ij}^{*}-\bar{Z}_{ij}^{T}\theta _{1}}^{e_{ij}^{*} -D_{ij}^{T}\vartheta -\bar{Z}_{ij}^{T}\theta _{1}}E(\psi (\varepsilon _{ij}+t)|X_{ij},Z_{ij},U_{ij})dt \cdot K\left( \frac{U_{ij}-u_{0}}{h}\right) \\ &=\sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})\Big [\frac{1}{2} (D_{ij}^{T}\vartheta )^{2}-e_{ij}^{*}D_{ij}^{T}\vartheta +D_{ij}^{T}\vartheta \bar{Z}_{ij}^{T}\theta _{1}\Big ] K\left( \frac{U_{ij}-u_{0}}{h}\right) +o_{p}(1). \end{aligned}$$

Using arguments similar to those used in the proof of Lemma 2.1 of Hallin et al. (2004b), we deduce that

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})(D_{ij}^{T}\vartheta )^{2}K\left( \frac{U_{ij}-u_{0}}{h}\right)&=f(u_{0})\vartheta ^{T}(P\otimes \Omega (u_{0}))\vartheta +o_{p}(1),\\ \sum _{i=1}^{m}\sum _{j=1}^{n}\phi (X_{ij},U_{ij})e_{ij}^{*}D_{ij}^{T}\vartheta K\left( \frac{U_{ij}-u_{0}}{h}\right)&=\frac{1}{2}(Nh)^{1/2}h^{2}f(u_{0})\vartheta ^{T}[\tilde{\kappa }\otimes (\Omega (u_{0})\alpha ^{''}(u_{0}))]+o_{p}(1), \end{aligned}$$

where \(\tilde{\kappa }=(\mu _{2},0)^{T}\). Arguing as in the proof of Theorem 3.1 and using the fact that \(\sum _{i=1}^{m}\sum _{j=1}^{n}E|\phi (X_{ij},U_{ij})D_{ij}^{T}\vartheta \bar{Z}_{ij}^{T}\theta _{1}| K(\frac{U_{ij}-u_{0}}{h})=O(h^{1/2})=o(1)\), we can prove that (6.19) holds. The proof of Theorem 3.2 is finished. \(\square \)

Cite this article

Qingguo, T. Robust estimation for spatial semiparametric varying coefficient partially linear regression. Stat Papers 56, 1137–1161 (2015). https://doi.org/10.1007/s00362-014-0629-z
