Abstract
Conventional higher-order spatial autoregressive models assume that all regression coefficients are constant, which ignores dynamic features that may exist in spatial data. In this paper, we introduce a semiparametric dynamic higher-order spatial autoregressive model by allowing the regression coefficients in the classical higher-order spatial autoregressive model to vary smoothly with a continuous explanatory variable, which enables us to explore dynamic features in spatial data. We develop a sieve two-stage least squares method for the proposed model and derive the asymptotic properties of the resulting estimators. Furthermore, we develop two testing methods to check, respectively, the appropriateness of a linear constraint on the spatial lag parameters and the stationarity of the regression relationship. Simulation studies show that the proposed estimation and testing methods perform well in finite samples. Finally, the Boston house price data are analyzed to demonstrate the proposed model and its estimation and testing methods.
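For concreteness, the model described in the abstract can be written in the following generic form. This display is a sketch inferred from the abstract and the notation of the appendix (\(r\) spatial weight matrices, \(q\) regressors, and a continuous smoothing variable \(U\)); it is not a verbatim reproduction of the paper's Eq. (1):

```latex
% Semiparametric dynamic higher-order SAR model (sketch):
% r spatial lags with constant coefficients \rho_{0j}, and regression
% coefficients \beta_{0k}(\cdot) that vary smoothly with U.
\begin{equation*}
  Y_{n,i} \;=\; \sum_{j=1}^{r} \rho_{0j}\,(\mathbf{W}_{nj}\mathbf{Y}_{n})_{i}
  \;+\; \sum_{k=1}^{q} x_{n,ik}\,\beta_{0k}(U_{n,i})
  \;+\; \varepsilon_{n,i},
  \qquad i=1,\ldots,n,
\end{equation*}
```

In the sieve two-stage least squares procedure, each unknown function \(\beta_{0k}(\cdot)\) is approximated by a basis expansion \(\mathbf{B}(u)^{\mathrm{T}}\boldsymbol{\gamma}_{k0}\) with \(K\) terms, which is the source of the approximation-error rate \(K^{-\delta}\) appearing throughout the appendix.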
Acknowledgements
The authors are grateful to the editor Werner G. Müller and the reviewers for their constructive comments and suggestions, which led to an improved version of this paper. This research was supported by the Natural Science Foundation of Shaanxi Province [grant 2021JM349] and the National Natural Science Foundation of China [grants 11972273 and 52170172].
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Appendix
In this appendix, we give detailed technical proofs of Theorems 1–5 in Sects. 2 and 3. The following three facts are frequently used in our proofs.
Fact 1. If the row and column sums of \(n{\times }n\) matrices \({\textbf{A}}_{n1}\) and \({\textbf{A}}_{n2}\) are uniformly bounded in absolute value, then the row and column sums of \({\textbf{A}}_{n1}{\textbf{A}}_{n2}\) and \({\textbf{A}}_{n2}{\textbf{A}}_{n1}\) are also uniformly bounded in absolute value.
Fact 2. The largest eigenvalue of an idempotent matrix is at most one.
Fact 3. For any \(n{\times }n\) matrix \({\textbf{B}}_{n}\), its spectral radius is bounded by \({\textrm{max}}_{1{\le }i{\le }n} \sum _{j=1}^{n}|b_{n,ij}|\), where \(b_{n,ij}\) is the (i, j)th element of \({\textbf{B}}_{n}\).
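Facts 2 and 3 are standard linear-algebra results: the eigenvalues of an idempotent matrix lie in \(\{0,1\}\), and the spectral radius is bounded by the maximum absolute row sum (a consequence of Gershgorin's theorem). A quick numerical sanity check with numpy (illustrative only, not part of the proofs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fact 2: a projection (idempotent) matrix P = X(X'X)^{-1}X'
# has eigenvalues in {0, 1}, so its largest eigenvalue is at most one.
X = rng.normal(size=(8, 3))
P = X @ np.linalg.solve(X.T @ X, X.T)       # symmetric and idempotent
eig_P = np.linalg.eigvalsh(P)
assert eig_P.max() <= 1.0 + 1e-10

# Fact 3: the spectral radius of any square matrix B is bounded by
# the maximum absolute row sum max_i sum_j |b_ij|.
B = rng.normal(size=(8, 8))
spectral_radius = np.abs(np.linalg.eigvals(B)).max()
row_sum_bound = np.abs(B).sum(axis=1).max()
assert spectral_radius <= row_sum_bound + 1e-10
```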
Proof of Theorem 1
By Eq. (5) and noticing \(({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{D}}_{n}={\textbf{0}}\), we obtain
where \({\textbf{R}}_{n}={\textbf{M}}_{n}-{\textbf{D}}_{n}{\varvec{\gamma }}_{0}\) and \(\widetilde{\textbf{R}}_{n}=({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\).
First, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n}\). Let \({\overline{\varvec{\varepsilon }}}_{n}=({\textbf{G}}_{n1}{\varvec{\varepsilon }}_{n}, \ldots ,{\textbf{G}}_{nr}{\varvec{\varepsilon }}_{n})\), where \({\textbf{G}}_{nj}={\textbf{G}}_{nj}({\varvec{\rho }}_{0})\) (\(j=1,\ldots ,r\)). Then, \({\textbf{Z}}_{n}={\overline{\textbf{Z}}}_{n}+{\overline{\varvec{\varepsilon }}}_{n}\). Thus, \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n}\) can be decomposed into
where \(B_{n1}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n}\), \(B_{n2}=\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n}\) and \(B_{n3}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\).
For \(i,j=1,\ldots ,r\), it follows from Assumption 1.3 and Fact 1 that the row sums of \({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}\) are uniformly bounded in absolute value. Hence, we obtain \({\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}})={{O}}(1)\) by Fact 3. This together with Fact 2 yields
Combining this with Markov’s inequality yields
Thus, we have \(B_{n1}={{O}}_{P}(1)\).
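The step from a bounded expectation to stochastic boundedness used here is the generic Markov argument:

```latex
\Pr\big(|X_{n}|\ge M\big)\;\le\;\frac{\mathrm{E}|X_{n}|}{M}
\qquad\text{for all } M>0,
\quad\text{so}\quad
\mathrm{E}|X_{n}|=O(1)\ \Longrightarrow\ X_{n}=O_{P}(1).
```

Applied with \(X_{n}\) equal to (the norm of) \(B_{n1}\), this converts the expectation bound of the preceding display into \(B_{n1}={{O}}_{P}(1)\).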
For \(i=1,\ldots ,r\), it follows from Assumption 1.3 and Facts 1 and 3 that \({\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}})={{O}}(1)\). This together with Fact 2 and Assumption 3.4 yields
This means that
Therefore, we have \(B_{n2}={{O}}_{P}(1)\). Similarly, we have \(B_{n3}={{O}}_{P}(1)\). By combining the convergence rates of \(B_{n1}\), \(B_{n2}\) and \(B_{n3}\), we obtain
Next, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n}\). Obviously,
where \(B_{n4}=\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\) and \(B_{n5}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\). We first show that \(B_{n4}={{O}}({\sqrt{n}}K^{-{\delta }})\). Let \({{R}}_{n,i}\) be the ith element of \({\textbf{R}}_{n}\), then \({{R}}_{n,i}=\sum _{j=1}^{q}x_{n,ij}[{\beta }_{0j}({U}_{n,i})-{\textbf{B}}({U}_{n,i})^{\textrm{T}} {\varvec{\gamma }}_{j0}]\). It follows from Assumption 3.1 that there exists a constant \(c_{X}>0\) such that \(\max _{1{\le }i{\le }n,1{\le }j{\le }q}|x_{n,ij}|{\le }c_{X}\) for all \(n{\ge }1\). This together with Assumption 4.1 yields
This yields
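The two displays omitted above are not reproduced here; based on the definition of \({{R}}_{n,i}\), the bound \(\max_{i,j}|x_{n,ij}|\le c_{X}\), and Assumption 4.1 (sieve approximation error of order \(K^{-\delta}\)), they presumably read along these lines:

```latex
\max_{1\le i\le n} |R_{n,i}|
  \;\le\; c_{X} \sum_{j=1}^{q}
  \sup_{u}\big|\beta_{0j}(u)-\mathbf{B}(u)^{\mathrm{T}}\boldsymbol{\gamma}_{j0}\big|
  \;=\; O(K^{-\delta}),
\qquad\text{hence}\qquad
\|\mathbf{R}_{n}\| \;\le\; \sqrt{n}\,\max_{1\le i\le n}|R_{n,i}|
  \;=\; O\!\big(\sqrt{n}\,K^{-\delta}\big).
```

This is consistent with the rate \(\Vert {\textbf{R}}_{n}\Vert ^{2}={{O}}(nK^{-2{\delta }})\) invoked later in the proof of Theorem 3.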
Similar to the proof of \(B_{n2}\), we have
Thus, we have \(B_{n4}={{O}}({\sqrt{n}}K^{-{\delta }})\). For \(i=1,\ldots ,r\), similar to the proof of (A.2), we have
This implies that
Therefore, we have \(B_{n5}={{O}}_{P}({\sqrt{n}}K^{-{\delta }})\). By combining the convergence rates of \(B_{n4}\) and \(B_{n5}\), we obtain
Last, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\varvec{\varepsilon }}_{n}\). It directly follows from (A.2) that
By an analogous proof to that of (A.2), we can show that
By combining (A.7), (A.8) and the Cauchy–Schwarz inequality, we obtain
This implies that \(\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}={{O}}_{P}(1)\). Thus, we have
By combining (A.4), (A.6) and (A.9), we obtain
Invoking the central limit theorem and Slutsky’s Lemma, we have
Thus, we complete the proof of Theorem 1. \(\square \)
Proof of Theorem 2
First, we consider the convergence rate of \({\widehat{\varvec{\gamma }}}\). By simple calculation, we have
where \(B_{n6}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\), \(B_{n7}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\) and \(B_{n8}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\).
By Assumption 4.4, Fact 2 and \({\textbf{Z}}_{n}=\overline{\textbf{Z}}_{n}+\overline{\varvec{\varepsilon }}_{n}\), we have
where \(B_{n61}={\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\textbf{Z}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\), \(B_{n62}={\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\) and \(B_{n63}=2{\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\).
From Theorem 1, we obtain \(\Vert \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}\Vert ^{2}= {{O}}_{P}(n^{-1})\). This together with Assumption 3.4 yields
For \(i,j=1,\ldots ,r\), it follows from Assumption 1.3 and Facts 1 and 3 that \( {\textrm{E}}(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n}) =n^{-1}{\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}) =n^{-1}{\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}) {\le }{\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}) ={{O}}(1). \) This implies that \(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n} ={{O}}_{P}(1)\). Thus, we have
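The chain of (in)equalities above uses two elementary facts: the trace is invariant under cyclic permutation of a product, and the trace of an \(n\times n\) matrix is at most \(n\) times its spectral radius (each eigenvalue has modulus at most \({\eta }_{\max }\)). Both are easy to verify numerically; a small sketch with numpy (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
G_i = rng.normal(size=(n, n))
G_j = rng.normal(size=(n, n))

# tr(G_i' G_j) = tr(G_j G_i'): cyclic invariance of the trace.
t1 = np.trace(G_i.T @ G_j)
t2 = np.trace(G_j @ G_i.T)
assert np.isclose(t1, t2)

# |tr(M)| <= n * spectral_radius(M): the trace is the sum of the
# n eigenvalues, each of modulus at most the spectral radius.
M = G_j @ G_i.T
rho = np.abs(np.linalg.eigvals(M)).max()
assert abs(np.trace(M)) <= n * rho + 1e-10
```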
By using the Cauchy–Schwarz inequality and the orders of \(B_{n61}\) and \(B_{n62}\), we have
Combining the orders of \(B_{n61}\), \(B_{n62}\) and \(B_{n63}\), we obtain \(\Vert B_{n6}\Vert ={{O}}_{P}(n^{-1/2})\).
By Assumption 4.4, Fact 2 and (A.5), we obtain
This means that \(\Vert B_{n7}\Vert ={{O}}(K^{-{\delta }})\).
It follows from Assumption 4.4 that
This implies that \(\Vert B_{n8}\Vert ={{O}}_{P}({\sqrt{K/n}})\). By the triangle inequality and the orders of \(B_{n6}\), \(B_{n7}\) and \(B_{n8}\), we have
Next, we consider the uniform convergence rate of \({\widehat{\varvec{\beta }}}(u)\). By the definition of \({\widehat{\varvec{\beta }}}(u)\), the convergence rate of \({\widehat{\varvec{\gamma }}}\), and Assumptions 4.1 and 4.3, we obtain
This yields
Finally, we consider the limiting distribution of \({\widehat{\varvec{\beta }}}(u)\). By the definition of \({\widehat{\varvec{\beta }}}(u)\) and Assumption 4.1, we have
By combining the convergence rates of \(B_{n6}\) and \(B_{n7}\) and Assumption 4.3, we have
It follows from Assumption 4.5 and \({\zeta }(K){\rightarrow }{\infty }\) as \(n{\rightarrow }{\infty }\) that \(n^{-1/2}{\zeta }(K)={{o}}(1)\). This together with \({\sqrt{n}}K^{-{\delta }}={\textrm{o}}(1)\) yields \({\zeta }(K)K^{-{\delta }}=(n^{-1/2}{\zeta }(K))({\sqrt{n}}K^{-{\delta }}) ={{o}}(1)\). Thus, we have
According to the Cramér–Wold device, it is sufficient to prove
for any nonzero \(q{\times }1\) vector of constants \({\textbf{c}}\). Let \({\textbf{d}}_{n,i}=({\textbf{I}}_{q}{\otimes }{\textbf{B}}(U_{n,i})){\textbf{X}}_{n,i}\) (\(i=1,\ldots ,n\)), then
where \({\xi }_{n,i}=[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{d}}_{n,i}{\varepsilon }_{n,i}\). It follows from Assumptions 3.1 and 4.3 that
This together with Assumptions 4.3 and 4.4 yields
This together with Assumptions 2 and 4.5 yields
Combining this result with Lyapunov central limit theorem, we obtain
Thus, we complete the proof of Theorem 2. \(\square \)
Proof of Theorem 3
First, we prove the consistency of \({\widehat{\sigma }}^{2}\). By the definition of \({\widehat{\sigma }}^{2}\), we have
where \(C_{n1}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} (\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\), \(C_{n2}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n} (\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\), \(C_{n3}=n^{-1}\Vert {\textbf{R}}_{n}\Vert ^{2}\), \(C_{n4}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{D}}_{n} (\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\), \(C_{n5}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\), \(C_{n6}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\), \(C_{n7}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\), \(C_{n8}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\) and \(C_{n9}=n^{-1}{\textbf{R}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\).
By applying the law of large numbers for independent and identically distributed random variables, we have \(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} \overset{\text {P}}{\longrightarrow }{\sigma }_{0}^2\). Thus, to complete the proof of part (a), it suffices to show \(C_{nj}\overset{\text {P}}{\longrightarrow }0\) (\(j=1,\ldots ,9\)).
By Theorems 1 and 2 and their proofs, we have \(\Vert \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}\Vert ={{O}}_{P}(n^{-1/2})\), \(\Vert \widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0}\Vert ={{O}}_{P}({\sqrt{K/n}}+K^{-{\delta }})\), \(\Vert {\textbf{R}}_{n}\Vert ^{2}={{O}}(nK^{-2{\delta }})\) and \(n^{-1}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n}={{O}}_{P}(1)\). Combining these results with \(\Vert n^{-1/2}{\varvec{\varepsilon }}_{n}\Vert =(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}) ^{1/2}={{O}}_{P}(1)\), Assumption 4.4 and the Cauchy–Schwarz inequality, we obtain
Next, we prove part (b) of Theorem 3. By Theorems 1 and 2, we have \(\widehat{\varvec{\rho }}\overset{\text {P}}{\longrightarrow }{\varvec{\rho }}_{0}\) and \({\widehat{\varvec{\beta }}}(u)\overset{\text {P}}{\longrightarrow } {\varvec{\beta }}_{0}(u)\). This together with part (a) yields \(\widehat{\varvec{\varSigma }}\overset{\text {P}}{\longrightarrow } {\varvec{\varSigma }}\).
Finally, we prove part (c) of Theorem 3. Let \(\widehat{{\varSigma }}_{ij}(u)\) and \({{\varSigma }}_{ij}(u)\) be the (i, j)th elements of \(\widehat{\varvec{\varSigma }}(u)\) and \({\varvec{\varSigma }}(u)\), respectively. Then \(\widehat{{\varSigma }}_{ij}(u)={{\widehat{\sigma }}}^{2}{\textbf{e}}_{i}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{e}}_{j}\) and \({{\varSigma }}_{ij}(u)={\sigma }_{0}^{2}{\textbf{e}}_{i}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{e}}_{j}\), where \({\textbf{e}}_{i}\) (\({\textbf{e}}_{j}\)) is the \(q{\times }1\) vector whose ith (jth) element is 1 and whose other elements are 0. It follows from part (a) of Theorem 3 that \({\widehat{\sigma }}^{2}-{\sigma }_{0}^2={{o}}_{P}(1)\). This together with Assumptions 4.3–4.5 yields
where \(I(\cdot )\) is the indicator function. This shows that \(\widehat{\varvec{\varSigma }}(u)\) is a consistent estimator of \({\varvec{\varSigma }}(u)\). \(\square \)
Proof of Theorems 4 and 5
We only prove Theorem 5 because Theorem 4 is a special case of Theorem 5. By Theorem 1, it is easy to show
This implies that
where \({\varvec{\mu }}={\sqrt{n}}({\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1/2} ({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})\). This together with Theorem 3(b) and the Slutsky theorem yields
Thus, we have
where \({\lambda }=\lim _{n{\rightarrow }{\infty }}{\varvec{\mu }}^{\textrm{T}}{\varvec{\mu }} =\lim _{n{\rightarrow }{\infty }}n({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})^{\textrm{T}} ({\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1} ({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})\). \(\square \)
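In practice, the test of Theorem 5 rejects \(H_{0}:{\textbf{R}}{\varvec{\rho }}_{0}={\textbf{b}}\) when the Wald-type statistic exceeds a chi-square critical value. A minimal sketch of the computation; all numbers here (\(\widehat{\varvec{\rho }}\), \(\widehat{\varvec{\varSigma }}\), \({\textbf{R}}\), \({\textbf{b}}\)) are hypothetical and chosen only for illustration:

```python
import numpy as np

n = 400
rho_hat = np.array([0.30, 0.18])          # hypothetical spatial lag estimates
Sigma_hat = np.array([[0.8, 0.1],         # hypothetical estimate of Sigma
                      [0.1, 0.9]])
R = np.array([[1.0, 1.0]])                # H0: rho_1 + rho_2 = 0.5
b = np.array([0.5])

# Wald-type statistic: n (R rho - b)' (R Sigma R')^{-1} (R rho - b),
# asymptotically chi-square with rank(R) degrees of freedom under H0.
diff = R @ rho_hat - b
T = float(n * diff @ np.linalg.solve(R @ Sigma_hat @ R.T, diff))
reject = T > 3.841  # 5% critical value of chi-square(1)
```

Under a fixed alternative, \(T\) instead follows a noncentral chi-square law with noncentrality \({\lambda }\) as displayed above, which is what gives the test its power.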
Cite this article
Li, T., Wang, Y. & Fang, K. A semiparametric dynamic higher-order spatial autoregressive model. Stat Papers 65, 1085–1123 (2024). https://doi.org/10.1007/s00362-023-01489-y
Keywords
- Spatial dependence
- Higher-order spatial autoregressive models
- Sieve two-stage least squares method
- Generalized likelihood ratio statistic
- Bootstrap