Skip to main content
Log in

A semiparametric dynamic higher-order spatial autoregressive model

  • Regular Article
  • Published:
Statistical Papers Aims and scope Submit manuscript

Abstract

Conventional higher-order spatial autoregressive models assume that all regression coefficients are constant, which ignores dynamic feature that may exist in spatial data. In this paper, we introduce a semiparametric dynamic higher-order spatial autoregressive model by allowing regression coefficients in classical higher-order spatial autoregressive models to smoothly vary with a continuous explanatory variable, which enables us to explore dynamic feature in spatial data. We develop a sieve two-stage least squares method for the proposed model and derive asymptotic properties of resulting estimators. Furthermore, we develop two testing methods to check appropriateness of certain linear constraint condition on the spatial lag parameters and stationarity of the regression relationship, respectively. Simulation studies show that the proposed estimation and testing methods perform quite well in finite samples. The Boston house price data are finally analyzed to demonstrate the proposed model and its estimation and testing methods.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Price excludes VAT (USA)
Tax calculation will be finalised during checkout.

Instant access to the full article PDF.

Fig. 1
Fig. 2
Fig. 3

Similar content being viewed by others

References

  • Badinger H, Egger P (2011) Estimation of higher-order spatial autoregressive cross-section models with heteroscedastic disturbances. Pap Reg Sci 90:213–235

    Article  Google Scholar 

  • Badinger H, Egger P (2013) Estimation and testing of higher-order spatial autoregressive panel data error component models. J Geogr Syst 15:453–489

    Article  Google Scholar 

  • Cheng SL, Chen JB (2021) Estimation of partially linear single-index spatial autoregressive model. Stat Pap 62:495–531

    Article  MathSciNet  Google Scholar 

  • Du J, Sun XQ, Cao RY, Zhang ZZ (2018) Statistical inference for partially linear additive spatial autoregressive models. Spat Stat 25:52–67

    Article  MathSciNet  Google Scholar 

  • Fan JQ, Huang T (2005) Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11:1031–1057

    Article  MathSciNet  Google Scholar 

  • Fan JQ, Jiang JC (2007) Nonparametric inference with generalized likelihood ratio tests. Test 16:409–444

    Article  MathSciNet  Google Scholar 

  • Gilley OW, Pace RK (1996) On the Harrison and Rubinfeld data. J Environ Econ Manag 31:403–405

    Article  Google Scholar 

  • Guo JC, Qu X (2019) Spatial interactive effects on housing prices in Shanghai and Beijing. Reg Sci Urban Econ 76:147–160

    Article  Google Scholar 

  • Gupta A, Robinson PM (2015) Inference on higher-order spatial autoregressive models with increasingly many parameters. J Econ 186:19–31

    Article  MathSciNet  Google Scholar 

  • Gupta A, Robinson PM (2018) Pseudo maximum likelihood estimation of spatial autoregressive models with increasing dimension. J Econ 202:92–107

    Article  MathSciNet  Google Scholar 

  • Hall P, Hart JD (1990) Bootstrap test for difference between means in nonparametric regression. J Am Stat Assoc 412:1039–1049

    Article  MathSciNet  Google Scholar 

  • Han XY, Hsieh CS, Lee LF (2017) Estimation and model selection of higher-order spatial autoregressive model: an efficient Bayesian approach. Reg Sci Urban Econ 63:97–120

    Article  Google Scholar 

  • Härdle W, Mammen E (1993) Comparing nonparametric versus parametric regression fits. Ann Stat 21:1926–1947

    Article  MathSciNet  Google Scholar 

  • Harrison D, Rubinfeld DL (1978) Hedonic housing prices and the demand for clean air. J Environ Econ Manag 5:81–102

    Article  Google Scholar 

  • Kang XJ, Li TZ (2008) Testing a linear relationship in varying coefficient spatial autoregressive models. Commun Stat Simul Comput 47:187–205

    Article  MathSciNet  Google Scholar 

  • Kang XJ, Li TZ (2022) Estimation and testing of a higher-order partially linear spatial autoregressive model. J Stat Comput Simul 92:3167–3201

    Article  MathSciNet  Google Scholar 

  • Kelejian HH, Prucha IR (2010) Specification and estimation of spatial autoregressive models with autoregressive and heteroskedastic disturbances. J Econ 157:53–67

    Article  MathSciNet  Google Scholar 

  • Lee LF (2004) Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models. Econometrica 72:1899–1925

    Article  MathSciNet  Google Scholar 

  • Lee LF (2007) GMM and 2SLS estimation of mixed regressive, spatial autoregressive models. J Econ 137:489–514

    Article  MathSciNet  Google Scholar 

  • Lee LF, Liu XD (2010) Efficient GMM estimation of high order spatial autoregressive models. Econ Theory 26:187–230

    Article  MathSciNet  Google Scholar 

  • Li KM, Chen JB (2013) Profile maximum likelihood estimation of semi-parametric varying coefficient spatial lag model. J Quant Tech Econ 30:85–98

    Google Scholar 

  • Li DK, Mei CL, Wang N (2019) Tests for spatial dependence and heterogeneity in spatially autoregressive varying coefficient models with application to Boston house price analysis. Reg Sci Urban Econ 79:103470

    Article  Google Scholar 

  • Liao J, Wen L, Yin JX (2021) Model selection and averaging for higher-order spatial autoregressive model. J Syst Sci Math Sci 41:1400–1417

    Google Scholar 

  • Lin X, Lee LF (2010) GMM estimation of spatial autoregressive models with unknown heteroskedasticity. J Econ 157:34–52

    Article  MathSciNet  Google Scholar 

  • Lin X, Weinberg B (2014) Unrequited friendship? how reciprocity mediates adolescent peer effects. Reg Sci Urban Econ 48:144–153

    Article  Google Scholar 

  • Luo GW, Wu MX (2021) Variable selection for semiparametric varying-coefficient spatial autoregressive models with a diverging number of parameters. Commun Stat Theory Methods 50:2062–2079

    Article  MathSciNet  Google Scholar 

  • Ma SJ, Yang LJ (2011) Spline-backfitted kernel smoothing of partially linear additive model. J Stat Plan Inference 141:204–219

    Article  MathSciNet  Google Scholar 

  • Newey WK (1997) Convergence rates and asymptotic normality for series estimators. J Econ 79:147–168

    Article  MathSciNet  Google Scholar 

  • Su LJ, Jin SN (2010) Profile quasi-maximum likelihood estimation of partially linear spatial autoregressive models. J Econ 157:18–33

    Article  MathSciNet  Google Scholar 

  • Sun Y, Yan HJ, Zhang WY, Lu ZD (2014) A semiparametric spatial dynamic model. Ann Stat 42:700–727

    Article  MathSciNet  Google Scholar 

  • Tao J (2005) Spatial econometrics: models, methods and applications. Phd thesis, Department of Economics, Ohio State University. https://www.docin.com/p-504667641.html

  • Wakefield J (2007) Disease mapping and spatial regression with count data. Biostatistics 8:158–183

    Article  PubMed  Google Scholar 

  • Xu GY, Bai Y (2021) Estimation of nonparametric additive models with high order spatial autoregressive errors. Can J Stat 49:311–343

    Article  MathSciNet  Google Scholar 

  • Yang ZL (2018) Bootstrap LM tests for higher-order spatial effects in spatial linear regression models. Empir Econ 55:35–68

    Article  Google Scholar 

  • Zhang R, Zhou J, Lan W, Wang HS (2022) A case study on the shareholder network effect of stock market data: an SARMA approach. Sci China Math 65:2219–2242

    Article  MathSciNet  CAS  Google Scholar 

  • Zhang YQ, Li H, Feng YQ (2023) Inference for partially linear additive higher-order spatial autoregressive model with spatial autoregressive error and unknown heteroskedasticity. Commun Stat Simul Comput 52:898–924

    Article  MathSciNet  Google Scholar 

Download references

Acknowledgements

The authors are grateful to editor Werner G. M\({\ddot{\textrm{u}}}\)ller and reviewers for their constructive comments and suggestions, which lead to an improved version of this paper. This research was supported by the Natural Science Foundation of Shaanxi Province [grant 2021JM349] and the National Natural Science Foundation of China [grants 11972273 and 52170172].

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Tizheng Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

In this appendix, we give detailed technical proofs of Theorems 15 in Sects. 2 and 3. The following three facts are frequently used in our proofs.

Fact 1. If the row and column sums of \(n{\times }n\) matrices \({\textbf{A}}_{n1}\) and \({\textbf{A}}_{n2}\) are uniformly bounded in absolute value, then the row and column sums of \({\textbf{A}}_{n1}{\textbf{A}}_{n2}\) and \({\textbf{A}}_{n2}{\textbf{A}}_{n1}\) are also uniformly bounded in absolute value.

Fact 2. The largest eigenvalue of an idempotent matrix is at most one.

Fact 3. For any \(n{\times }n\) matrix \({\textbf{B}}_{n}\), its spectral radius is bounded by \({\textrm{max}}_{1{\le }i{\le }n} \sum _{j=1}^{n}|b_{n,ij}|\), where \(b_{n,ij}\) is the (ij)th element of \({\textbf{B}}_{n}\).

Proof of Theorem 1

By Eq. (5) and noticing \(({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{D}}_{n}={\textbf{0}}\), we obtain

$$\begin{aligned} \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}= & {} (\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n})^{-1} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Y}}_{n}-{\varvec{\rho }}_{0}\nonumber \\= & {} (\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n})^{-1} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})({\textbf{Z}}_{n}{\varvec{\rho }}_{0}+ {\textbf{D}}_{n}{\varvec{\gamma }}_{0}+{\textbf{R}}_{n}+{\varvec{\varepsilon }}_{n})-{\varvec{\rho }}_{0}\nonumber \\= & {} (\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n})^{-1} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n}+ (\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n})^{-1} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\varvec{\varepsilon }}_{n}, \end{aligned}$$
(A.1)

where \({\textbf{R}}_{n}={\textbf{M}}_{n}-{\textbf{D}}_{n}{\varvec{\gamma }}_{0}\) and \(\widetilde{\textbf{R}}_{n}=({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\).

First, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n}\). Let \({\overline{\varvec{\varepsilon }}}_{n}=({\textbf{G}}_{n1}{\varvec{\varepsilon }}_{n}, \ldots ,{\textbf{G}}_{nr}{\varvec{\varepsilon }}_{n})\), where \({\textbf{G}}_{nj}={\textbf{G}}_{nj}({\varvec{\rho }}_{0})\) (\(j=1,\ldots ,r\)). Then, \({\textbf{Z}}_{n}={\overline{\textbf{Z}}}_{n}+{\overline{\varvec{\varepsilon }}}_{n}\). Thus, \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n}\) can be decomposed into

$$\begin{aligned} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n}= & {} {\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{Z}}_{n}\\= & {} (\overline{\textbf{Z}}_{n}+\overline{\varvec{\varepsilon }}_{n})^{\textrm{T}} ({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}) (\overline{\textbf{Z}}_{n}+\overline{\varvec{\varepsilon }}_{n})\\= & {} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n} +\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n}+\\{} & {} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n} +\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\\= & {} n{\varvec{\varSigma }}_{n,1}+B_{n1}+B_{n2}+B_{n3}, \end{aligned}$$

where \(B_{n1}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n}\), \(B_{n2}=\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\varvec{\varepsilon }}_{n}\) and \(B_{n3}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\).

For \(i,j=1,\ldots ,r\), it follows from Assumption 1.3 and Fact 1 that the row sums of \({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}\) are uniformly bounded in absolute value. Hence, we obtain \({\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}})={{O}}(1)\) by Fact 3. This together with Fact 2 yields

$$\begin{aligned}{} & {} {\textrm{E}}({\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n})\\{} & {} \quad ={\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{nj})\\{} & {} \quad ={\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}))\\{} & {} \quad {\le }{\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}){\textrm{tr}}(({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}))\\{} & {} \quad {\le }{\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}){\textrm{tr}}({\textbf{A}}_{n})\\{} & {} \quad ={\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}){\textrm{tr}}({\textbf{V}}_{n}({\textbf{V}}_{n}^{\textrm{T}}{\textbf{V}}_{n})^{-1} {\textbf{V}}_{n}^{\textrm{T}})\\{} & {} \quad ={\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}){\textrm{tr}}({\textbf{I}}_{s})\\{} & {} \quad ={{o}}(1). \end{aligned}$$

Combining this with Markov’s inequality yields

$$\begin{aligned} {\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n} ={{O}}_{P}(1),\,\,i,j=1,\ldots ,r. \end{aligned}$$
(A.2)

Thus, we have \(B_{n1}={{O}}_{P}(1)\).

For \(i=1,\ldots ,r\), it follows from Assumption 1.3 and Facts 1 and 3 that \({\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}})={{O}}(1)\). This together with Fact 2 and Assumption 3.4 yields

$$\begin{aligned}{} & {} {\textrm{E}}(\Vert \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n}\Vert ^2)\\= & {} {\textrm{E}}({\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n})\\= & {} {\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni})\\\le & {} {\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}}) {\textrm{tr}}(\overline{\textbf{Z}}_{n}\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}))\\\le & {} {\sigma }_{0}^{2}{\overline{c}}_{Z}{\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}}) {\textrm{tr}}({\textbf{A}}_{n})\\= & {} {{O}}(1). \end{aligned}$$

This means that

$$\begin{aligned} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n} ={{O}}_{P}(1),\,\,i=1,\ldots ,r. \end{aligned}$$
(A.3)

Therefore, we have \(B_{n2}={{O}}_{P}(1)\). Similarly, we have \(B_{n3}={{O}}_{P}(1)\). By combining the convergence rates of \(B_{n1}\), \(B_{n2}\) and \(B_{n3}\), we obtain

$$\begin{aligned} n^{-1}\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n} ={\varvec{\varSigma }}_{n,1}+{{O}}_{P}(n^{-1}) ={\varvec{\varSigma }}_{n,1}+{{o}}_{P}(1). \end{aligned}$$
(A.4)

Next, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n}\). Obviously,

$$\begin{aligned}\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n}= & {} {\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\\= & {} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}+ \overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\\= & {} B_{n4}+B_{n5}, \end{aligned}$$

where \(B_{n4}=\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\) and \(B_{n5}=\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\). We first show that \(B_{n4}={{O}}({\sqrt{n}}K^{-{\delta }})\). Let \({{R}}_{n,i}\) be the ith element of \({\textbf{R}}_{n}\), then \({{R}}_{n,i}=\sum _{j=1}^{q}x_{n,ij}[{\beta }_{0j}({U}_{n,i})-{\textbf{B}}({U}_{n,i})^{\textrm{T}} {\varvec{\gamma }}_{j0}]\). It follows from Assumption 3.1 that there exists a constant \(c_{X}>0\) such that \(\max _{1{\le }i{\le }n,1{\le }j{\le }q}|x_{n,ij}|{\le }c_{X}\) for all \(n{\ge }1\). This together with Assumption 4.1 yields

$$\begin{aligned}|{{R}}_{n,i}|= & {} |\sum _{j=1}^{q}x_{n,ij}[{\beta }_{0j}({U}_{n,i})-{\textbf{B}}({U}_{n,i})^{\textrm{T}} {\varvec{\gamma }}_{j0}]|\\{} & {} \quad {\le }\sum _{j=1}^{q}|x_{n,ij}|{\cdot }|{\beta }_{0j}({U}_{n,i})-{\textbf{B}}({U}_{n,i})^{\textrm{T}} {\varvec{\gamma }}_{j0}|\\{} & {} \quad {\le } qc_{X}K^{-{\delta }}. \end{aligned}$$

This yields

$$\begin{aligned} \Vert {\textbf{R}}_{n}\Vert ^2=\sum _{i=1}^{n}{{R}}_{n,i}^{2} {\le }n{\cdot }\max _{1{\le }i{\le }n}{{R}}_{n,i}^{2} {\le }n{\cdot }\left( \max _{1{\le }i{\le }n}|{{R}}_{n,i}|\right) ^{2} ={{O}}(nK^{-2{\delta }}). \end{aligned}$$
(A.5)

Similar to the proof of \(B_{n2}\), we have

$$\begin{aligned}\Vert B_{n4}\Vert ^2= & {} {\textbf{R}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n})\overline{\textbf{Z}}_{n}\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\\\le & {} {\overline{c}}_{Z}{\textbf{R}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\\\le & {} {\overline{c}}_{Z}{\textbf{R}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\\= & {} {{O}}(nK^{-2{\delta }}). \end{aligned}$$

Thus, we have \(B_{n4}={{O}}({\sqrt{n}}K^{-{\delta }})\). For \(i=1,\ldots ,r\), similar to the proof of (A.2), we have

$$\begin{aligned}{} & {} {\textrm{E}}(\Vert {\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}\Vert ^2)\\= & {} {\textrm{E}}({\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}{\textbf{R}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n})\\= & {} {\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n}{\textbf{R}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni})\\\le & {} {\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}}) {\textrm{tr}}({\textbf{R}}_{n}{\textbf{R}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}))\\\le & {} {\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{ni}{\textbf{G}}_{ni}^{\textrm{T}}) {\textrm{tr}}({\textbf{R}}_{n}{\textbf{R}}_{n}^{\textrm{T}})\\= & {} {{O}}(nK^{-2{\delta }}). \end{aligned}$$

This implies that

$$\begin{aligned} {\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{R}}_{n} ={{O}}_{P}({\sqrt{n}}K^{-{\delta }}),\,i=1,\ldots ,r. \end{aligned}$$

Therefore, we have \(B_{n5}={{O}}_{P}({\sqrt{n}}K^{-{\delta }})\). By combining the convergence rates of \(B_{n4}\) and \(B_{n5}\), we obtain

$$\begin{aligned} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n} ={{O}}_{P}({\sqrt{n}}K^{-{\delta }}). \end{aligned}$$
(A.6)

Last, we consider \(\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\varvec{\varepsilon }}_{n}\). It directly follows from (A.2) that

$$\begin{aligned} \Vert {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n}\Vert ^{2} ={{O}}_{P}(1),\,i=1,\ldots ,r. \end{aligned}$$
(A.7)

By an analogous proof to that of (A.2), we can show that

$$\begin{aligned} \Vert {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}\Vert ^{2} ={{O}}_{P}(1). \end{aligned}$$
(A.8)

By combining (A.7), (A.8) and Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}\le & {} \Vert {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\textbf{G}}_{ni}{\varvec{\varepsilon }}_{n}\Vert {\cdot } \Vert {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}\Vert \\= & {} {{O}}_{P}(1),\,i=1,\ldots ,r. \end{aligned}$$

This implies that \(\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}={{O}}_{P}(1)\). Thus, we have

$$\begin{aligned} \widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\varvec{\varepsilon }}_{n}= & {} {\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}\nonumber \\= & {} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}+ \overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}\nonumber \\= & {} \overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}+{{O}}_{P}(1). \end{aligned}$$
(A.9)

By combining (A.4), (A.6) and (A.9), we obtain

$$\begin{aligned}{} & {} \sqrt{n}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\\= & {} (n^{-1}\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{Z}}_{n})^{-1} (n^{-1/2}\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\varvec{\varepsilon }}_{n} +n^{-1/2}\widetilde{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{A}}_{n}\widetilde{\textbf{R}}_{n})\\= & {} [{\varvec{\varSigma }}_{n,1}+{{o}}_{P}(1)]^{-1} [n^{-1/2}\overline{\textbf{Z}}_{n}^{\textrm{T}}({\textbf{I}}_{n}-{\textbf{P}}_{n}) {\textbf{A}}_{n}({\textbf{I}}_{n}-{\textbf{P}}_{n}){\varvec{\varepsilon }}_{n}+{{o}}_{P}(1)]. \end{aligned}$$

Invoking the central limit theorem and Slutsky’s Lemma, we have

$$\begin{aligned} {\sqrt{n}}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}) \overset{\text {D}}{\longrightarrow }N({\textbf{0}},{\varvec{\varSigma }}). \end{aligned}$$

Thus, we complete the proof of Theorem 1. \(\square \)

Proof of Theorem 2

First, we consider the convergence rate of \({\widehat{\varvec{\gamma }}}\). By simple calculation, we have

$$\begin{aligned}{\widehat{\varvec{\gamma }}}-{\varvec{\gamma }}_{0}= & {} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\textbf{D}}_{n}^{\textrm{T}}({\textbf{Y}}_{n}-{\textbf{Z}}_{n}\widehat{\varvec{\rho }}) -{\varvec{\gamma }}_{0}\\= & {} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\textbf{D}}_{n}^{\textrm{T}}({\textbf{Z}}_{n}{\varvec{\rho }}_{0}+ {\textbf{D}}_{n}{\varvec{\gamma }}_{0}+{\textbf{R}}_{n}+{\varvec{\varepsilon }}_{n} -{\textbf{Z}}_{n}\widehat{\varvec{\rho }})-{\varvec{\gamma }}_{0}\\= & {} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }}) +({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n} +({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\\= & {} B_{n6}+B_{n7}+B_{n8}, \end{aligned}$$

where \(B_{n6}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\), \(B_{n7}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\) and \(B_{n8}=({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\).

By Assumption 4.4, Fact 2 and \({\textbf{Z}}_{n}=\overline{\textbf{Z}}_{n}+\overline{\varvec{\varepsilon }}_{n}\), we have

$$\begin{aligned}\Vert B_{n6}\Vert ^{2}= & {} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} {\textbf{Z}}_{n}^{\textrm{T}}{\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\\le & {} n^{-1}{\eta }_{\min }^{-1}(n^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} {\textbf{Z}}_{n}^{\textrm{T}}{\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\textbf{D}}_{n}^{\textrm{T}}{\textbf{Z}}_{n}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\\le & {} {\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\= & {} {\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\textbf{Z}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }}) +{\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\{} & {} \,+\, 2{\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\= & {} B_{n61}+B_{n62}+B_{n63}, \end{aligned}$$

where \(B_{n61}={\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\textbf{Z}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\), \(B_{n62}={\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\) and \(B_{n63}=2{\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\).

From Theorem 1, we obtain \(\Vert \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}\Vert ^{2}= {{O}}_{P}(n^{-1})\). This together with Assumption 3.4 yields

$$\begin{aligned}B_{n61}= & {} {\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\textbf{Z}}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\\le & {} {\underline{c}}_{D}^{-1}{\eta }_{\max }(n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}} \overline{\textbf{Z}}_{n})\Vert \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}\Vert ^{2}\\= & {} {{O}}_{P}(n^{-1}). \end{aligned}$$

For \(i,j=1,\ldots ,r\), it follows from Assumption 1.3 and Facts 1 and 3 that \( {\textrm{E}}(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n}) =n^{-1}{\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}) =n^{-1}{\sigma }_{0}^{2}{\textrm{tr}}({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}) {\le }{\sigma }_{0}^{2}{\eta }_{\max }({\textbf{G}}_{nj}{\textbf{G}}_{ni}^{\textrm{T}}) ={{O}}(1). \) This implies that \(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{G}}_{ni}^{\textrm{T}}{\textbf{G}}_{nj}{\varvec{\varepsilon }}_{n} ={\textrm{O}}_{P}(1)\). Thus, we have

$$\begin{aligned}B_{n62}={\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\varvec{\varepsilon }}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})={{O}}_{P}(n^{-1}). \end{aligned}$$

By using Cauchy–Schwarz inequality and the orders of \(B_{n61}\) and \(B_{n62}\), we have

$$\begin{aligned}B_{n63}= & {} 2{\underline{c}}_{D}^{-1}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} (n^{-1}\overline{\textbf{Z}}_{n}^{\textrm{T}}\overline{\varvec{\varepsilon }}_{n}) ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\\\le & {} 2\left| \left[ {\underline{c}}_{D}^{-1/2}n^{-1/2}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})^{\textrm{T}} \overline{\textbf{D}}_{n}^{\textrm{T}}\right] {\cdot }\left[ {\underline{c}}_{D}^{-1/2}n^{-1/2}\overline{\varvec{\varepsilon }}_{n} ({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})\right] \right| \\\le & {} 2(B_{n61}B_{n62})^{1/2}\\= & {} {{O}}_{P}(n^{-1}). \end{aligned}$$

Combining the orders of \(B_{n61}\), \(B_{n62}\) and \(B_{n63}\), we obtain \(\Vert B_{n6}\Vert ={{O}}_{P}(n^{-1/2})\).

By Assumption 4.4, Fact 2 and (A.5), we obtain

$$\begin{aligned}\Vert B_{n7}\Vert ^{2}= & {} {\textbf{R}}_{n}^{\textrm{T}}{\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\\\le & {} n^{-1}{\eta }_{\min }^{-1}(n^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n}){\textbf{R}}_{n}^{\textrm{T}} {\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\\\le & {} n^{-1}{\underline{c}}_{D}^{-1}\Vert {\textbf{R}}_{n}\Vert ^{2}\\= & {} {{O}}(K^{-2{\delta }}). \end{aligned}$$

This means that \(\Vert B_{n7}\Vert ={{O}}(K^{-{\delta }})\).

It follows from Assumption 4.4 that

$$\begin{aligned}{\textrm{E}}(\Vert B_{n8}\Vert ^{2})= & {} {\textrm{E}}({\varvec{\varepsilon }}_{n}^{\textrm{T}}{\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n})\\\le & {} n^{-1}{\eta }_{\min }^{-1}(n^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n}) {\textrm{E}}({\varvec{\varepsilon }}_{n}^{\textrm{T}} {\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n})\\= & {} n^{-1}{\underline{c}}_{D}^{-1}{\sigma }_{0}^{2} {\textrm{tr}}({\textbf{D}}_{n}({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}})\\= & {} {{O}}(K/n). \end{aligned}$$

This implies that \(\Vert B_{n8}\Vert ={{O}}_{P}({\sqrt{K/n}})\). By the triangle inequality and the orders of \(B_{n6}\), \(B_{n7}\) and \(B_{n8}\), we have

$$\begin{aligned} \Vert {\widehat{\varvec{\gamma }}}-{\varvec{\gamma }}_{0}\Vert {\le }\Vert B_{n6}\Vert +\Vert B_{n7}\Vert +\Vert B_{n8}\Vert ={{O}}_{P}({\sqrt{K/n}}+K^{-{\delta }}). \end{aligned}$$

Next, we consider the uniform convergence rate of \({\widehat{\varvec{\beta }}}(u)\). By the definition of \({\widehat{\varvec{\beta }}}(u)\), the convergence rate of \({\widehat{\varvec{\gamma }}}\), and Assumptions 4.1 and 4.3, we obtain

$$\begin{aligned}{} & {} {\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{\widehat{{\beta }}}_{j}(u)-{{\beta }}_{0j}(u)|\\{} & {} \quad ={\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{\textbf{B}}(u)^{\textrm{T}}{\widehat{\varvec{\gamma }}}_{j} -{{\beta }}_{0j}(u)|\\{} & {} \quad ={\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{\textbf{B}}(u)^{\textrm{T}}({\widehat{\varvec{\gamma }}}_{j}- {{\varvec{\gamma }}}_{0j})+{\textbf{B}}(u)^{\textrm{T}}{{\varvec{\gamma }}}_{0j} -{{\beta }}_{0j}(u)|\\{} & {} \quad {\le } {\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{\textbf{B}}(u)^{\textrm{T}}({\widehat{\varvec{\gamma }}}_{j}- {{\varvec{\gamma }}}_{0j})| +{\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{{\beta }}_{0j}(u)-{\textbf{B}}(u)^{\textrm{T}}{{\varvec{\gamma }}}_{0j}|\\{} & {} \quad {\le } \Vert {\widehat{\varvec{\gamma }}}- {{\varvec{\gamma }}}_{0}\Vert {\cdot }{\sup }_{{{u}}{\in }{{\mathcal {U}}}}\Vert {\textbf{B}}(u)\Vert + {\sup }_{{{u}}{\in }{{\mathcal {U}}}}|{{\beta }}_{0j}(u)-{\textbf{B}}(u)^{\textrm{T}}{{\varvec{\gamma }}}_{0j}|\\{} & {} \quad {\le } {\zeta }(K)\Vert {\widehat{\varvec{\gamma }}}- {{\varvec{\gamma }}}_{0}\Vert +{{O}}(K^{-{\delta }})\\{} & {} \quad ={{O}}_{P}({\zeta }(K)({\sqrt{K/n}}+K^{-{\delta }})). \end{aligned}$$

This yields

$$\begin{aligned} {\sup }_{{{u}}{\in }{{\mathcal {U}}}}\Vert {\widehat{\varvec{\beta }}}(u)-{\varvec{\beta }}_{0}(u)\Vert ={{O}}_{P}({\zeta }(K)({\sqrt{K/n}}+K^{-{\delta }})). \end{aligned}$$
(A.10)
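The rate in (A.10) is the familiar sieve trade-off: the stochastic term \({\sqrt{K/n}}\) grows with the basis dimension \(K\) while the approximation bias \(K^{-{\delta }}\) shrinks. A minimal numerical sketch of this trade-off, using a polynomial sieve as a hypothetical stand-in for the B-spline basis \({\textbf{B}}(u)\) (the coefficient function and all constants below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
u = rng.uniform(0.0, 1.0, n)
beta0 = lambda x: np.sin(2 * np.pi * x)      # a smooth "true" coefficient function
y = beta0(u) + 0.1 * rng.standard_normal(n)  # noisy observations

grid = np.linspace(0.0, 1.0, 401)

def sup_error(K):
    # Least-squares sieve fit with a K-dimensional polynomial basis.
    B = np.vander(u, K, increasing=True)     # n x K design matrix, rows B(u_i)^T
    gamma_hat, *_ = np.linalg.lstsq(B, y, rcond=None)
    fit = np.vander(grid, K, increasing=True) @ gamma_hat
    return float(np.max(np.abs(fit - beta0(grid))))

errors = {K: sup_error(K) for K in (4, 8, 12)}
print(errors)  # sup-norm errors; approximation bias dominates at small K
```

For fixed \(n\), increasing \(K\) far beyond this range would let the \({\sqrt{K/n}}\) term dominate and reverse the improvement, matching the balance in (A.10).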

Finally, we consider the limiting distribution of \({\widehat{\varvec{\beta }}}(u)\). By the definition of \({\widehat{\varvec{\beta }}}(u)\) and Assumption 4.1, we have

$$\begin{aligned}{\widehat{\varvec{\beta }}}(u)-{\varvec{\beta }}_{0}(u)= & {} {\varvec{\varGamma }}(u)({\widehat{\varvec{\gamma }}}- {{\varvec{\gamma }}}_{0})+{\varvec{\varGamma }}(u){{\varvec{\gamma }}}_{0} -{\varvec{\beta }}_{0}(u)\\= & {} {\varvec{\varGamma }}(u)(B_{n6}+B_{n7}+B_{n8}) +{{O}}(K^{-{\delta }})\\= & {} {\varvec{\varGamma }}(u)(B_{n6}+B_{n7}+B_{n8}) +{{o}}(1). \end{aligned}$$

By combining the convergence rates of \(B_{n6}\) and \(B_{n7}\) and Assumption 4.3, we have

$$\begin{aligned} {\varvec{\varGamma }}(u)B_{n6}={{O}}_{P}(n^{-1/2}{\zeta }(K))\,\,\, {\textrm{and}}\,\,\, {\varvec{\varGamma }}(u)B_{n7}={{O}}({\zeta }(K)K^{-{\delta }}). \end{aligned}$$

It follows from Assumption 4.5 and \({\zeta }(K){\rightarrow }{\infty }\) as \(n{\rightarrow }{\infty }\) that \(n^{-1/2}{\zeta }(K)={{o}}(1)\). This together with \({\sqrt{n}}K^{-{\delta }}={{o}}(1)\) yields \({\zeta }(K)K^{-{\delta }}=(n^{-1/2}{\zeta }(K))({\sqrt{n}}K^{-{\delta }}) ={{o}}(1)\). Thus, we have

$$\begin{aligned} {\widehat{\varvec{\beta }}}(u)-{\varvec{\beta }}_{0}(u)={\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} +{{o}}_{P}(1). \end{aligned}$$

According to the Cramér–Wold device, it suffices to prove

$$\begin{aligned} {\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\overset{\text {D}}{\longrightarrow } N({{0}},{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}) \end{aligned}$$

for any nonzero \(q{\times }1\) vector of constants \({\textbf{c}}\). Let \({\textbf{d}}_{n,i}=({\textbf{I}}_{q}{\otimes }{\textbf{B}}(U_{n,i})){\textbf{X}}_{n,i}\) (\(i=1,\ldots ,n\)), then

$$\begin{aligned}{}[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} =\sum _{i=1}^{n}{\xi }_{n,i}, \end{aligned}$$

where \({\xi }_{n,i}=[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{d}}_{n,i}{\varepsilon }_{n,i}\). It follows from Assumptions 3.1 and 4.3 that

$$\begin{aligned} \Vert {\textbf{d}}_{n,i}\Vert ^{2}=\sum _{j=1}^{q}x_{n,ij}^{2}\Vert {\textbf{B}}(U_{n,i})\Vert ^{2}{\,} {\le }\left( \sup _{u{\in }{{\mathcal {U}}}}\Vert {\textbf{B}}(u)\Vert \right) ^{2}{\,}qc_{X}^{2}. \end{aligned}$$

This together with Assumptions 4.3 and 4.4 yields

$$\begin{aligned}{} & {} \max _{1{\le }i{\le }n}|[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{d}}_{n,i}|\\{} & {} \quad \le [{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\cdot }\Vert {\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}\Vert {\cdot }\max _{1{\le }i{\le }n}\Vert {\textbf{d}}_{n,i}\Vert \\{} & {} \quad \le {\sqrt{q}}c_{X}{\zeta }(K)[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2} [{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-2}{\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{c}}]^{1/2}\\{} & {} \quad \le {\sqrt{q}}c_{X}{\zeta }(K){\underline{c}}_{D}^{-1/2}n^{-1/2}[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2} [{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{c}}]^{1/2}\\{} & {} \quad ={\sqrt{q}}c_{X}{\underline{c}}_{D}^{-1/2}{\sigma }_{0}^{-1}{\zeta }(K)n^{-1/2}. \end{aligned}$$

This together with Assumptions 2 and 4.5 yields

$$\begin{aligned}\sum _{i=1}^{n}{\textrm{E}}|{\xi }_{n,i}|^{3}= & {} \sum _{i=1}^{n}|[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{d}}_{n,i}|^{3} {\textrm{E}}|{\varepsilon }_{n,i}|^{3}\\\le & {} Cn{\cdot }\max _{1{\le }i{\le }n}|[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{d}}_{n,i}|^{3}\\= & {} {{O}}(n^{-1/2}{\zeta }(K)^{3})\\= & {} {{o}}(1). \end{aligned}$$

Combining this result with the Lyapunov central limit theorem, we obtain

$$\begin{aligned}{}[{\textbf{c}}^{\textrm{T}}{\varvec{\varSigma }}(u){\textbf{c}}]^{-1/2}{\textbf{c}}^{\textrm{T}}{\varvec{\varGamma }}(u) ({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} \overset{\text {D}}{\longrightarrow }N(0,1). \end{aligned}$$

Thus, we complete the proof of Theorem 2. \(\square \)
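The Lyapunov argument above needs only \(\max _{1{\le }i{\le }n}|{\xi }_{n,i}|={{O}}(n^{-1/2}{\zeta }(K))\) and bounded third moments of the errors; normality of \({\varepsilon }_{n,i}\) is never assumed. A small simulation sketch of this mechanism (the weights and the error law below are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 4000

# Deterministic weights a_i with sum of squares 1 and max |a_i| = O(n^{-1/2}),
# playing the role of the coefficients of eps_{n,i} in xi_{n,i}.
a = np.cos(np.arange(n))
a = a / np.linalg.norm(a)

# Markedly non-normal errors: centred, unit-variance chi-square(1).
eps = (rng.chisquare(1, size=(reps, n)) - 1.0) / np.sqrt(2.0)
s = eps @ a                                  # reps draws of sum_i a_i * eps_i

print(s.mean(), s.var())                     # close to 0 and 1, as the CLT predicts
```

Because \(\max_i|a_i|\) vanishes, the Lyapunov ratio \(\sum_i {\textrm{E}}|a_i{\varepsilon }_i|^{3}\) tends to zero and the standardized sum is approximately \(N(0,1)\) despite the skewed errors.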

Proof of Theorem 3

First, we prove the consistency of \({\widehat{\sigma }}^{2}\). By the definition of \({\widehat{\sigma }}^{2}\), we have

$$\begin{aligned}{\widehat{\sigma }}^{2}= & {} n^{-1}\Vert {\textbf{Y}}_{n}-{\textbf{Z}}_{n}\widehat{\varvec{\rho }} -{\textbf{D}}_{n}\widehat{\varvec{\gamma }}\Vert ^{2}\\= & {} n^{-1}\Vert {\textbf{Z}}_{n}{\varvec{\rho }}_{0}+{\textbf{R}}_{n}+ {\textbf{D}}_{n}{\varvec{\gamma }}_{0}+{\varvec{\varepsilon }}_{n} -{\textbf{Z}}_{n}\widehat{\varvec{\rho }} -{\textbf{D}}_{n}\widehat{\varvec{\gamma }}\Vert ^{2}\\= & {} n^{-1}\Vert {\textbf{Z}}_{n}({\varvec{\rho }}_{0}-\widehat{\varvec{\rho }})+ {\textbf{D}}_{n}({\varvec{\gamma }}_{0}-\widehat{\varvec{\gamma }})+{\textbf{R}}_{n}+ {\varvec{\varepsilon }}_{n}\Vert ^{2}\\= & {} n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} +C_{n1}+C_{n2}+C_{n3}+2C_{n4}-2C_{n5}-2C_{n6}-2C_{n7}-2C_{n8}+2C_{n9}, \end{aligned}$$

where \(C_{n1}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n} (\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\), \(C_{n2}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n} (\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\), \(C_{n3}=n^{-1}\Vert {\textbf{R}}_{n}\Vert ^{2}\), \(C_{n4}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{D}}_{n} (\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\), \(C_{n5}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\), \(C_{n6}=n^{-1}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}{\textbf{Z}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\), \(C_{n7}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{R}}_{n}\), \(C_{n8}=n^{-1}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})^{\textrm{T}}{\textbf{D}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\) and \(C_{n9}=n^{-1}{\textbf{R}}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}\).

By applying the law of large numbers for independent and identically distributed random variables, we have \(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n} \overset{\text {P}}{\longrightarrow }{\sigma }_{0}^2\). Thus, to complete the proof of part (a), it suffices to show \(C_{nj}\overset{\text {P}}{\longrightarrow }0\) (\(j=1,\ldots ,9\)).

By Theorems 1 and 2 and their proofs, we have \(\Vert \widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}\Vert ={{O}}_{P}(n^{-1/2})\), \(\Vert \widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0}\Vert ={{O}}_{P}({\sqrt{K/n}}+K^{-{\delta }})\), \(\Vert {\textbf{R}}_{n}\Vert ^{2}={{O}}(nK^{-2{\delta }})\) and \(n^{-1}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n}={{O}}_{P}(1)\). Combining these results with \(\Vert n^{-1/2}{\varvec{\varepsilon }}_{n}\Vert =(n^{-1}{\varvec{\varepsilon }}_{n}^{\textrm{T}}{\varvec{\varepsilon }}_{n}) ^{1/2}={{O}}_{P}(1)\), Assumption 4.4 and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}C_{n1}= & {} (\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})^{\textrm{T}}(n^{-1}{\textbf{Z}}_{n}^{\textrm{T}}{\textbf{Z}}_{n}) (\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0}) ={{O}}_{P}(n^{-1})={{o}}_{P}(1),\\ C_{n2}\le & {} {\overline{c}}_{D}\Vert \widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0}\Vert ^{2} ={{O}}_{P}(K/n+K^{-2{\delta }})={{o}}_{P}(1),\\ C_{n3}= & {} n^{-1}\Vert {\textbf{R}}_{n}\Vert ^{2} ={{O}}(K^{-2{\delta }})={{o}}(1),\\ C_{n4}\le & {} \Vert n^{-1/2}{\textbf{Z}}_{n}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\Vert {\cdot } \Vert n^{-1/2}{\textbf{D}}_{n}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\Vert ={{o}}_{P}(1),\\ C_{n5}\le & {} \Vert n^{-1/2}{\textbf{Z}}_{n}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\Vert {\cdot } \Vert n^{-1/2}{\textbf{R}}_{n}\Vert ={{O}}_{P}(n^{-1/2}K^{-{\delta }})={{o}}_{P}(1),\\ C_{n6}\le & {} \Vert n^{-1/2}{\textbf{Z}}_{n}(\widehat{\varvec{\rho }}-{\varvec{\rho }}_{0})\Vert {\cdot } \Vert n^{-1/2}{\varvec{\varepsilon }}_{n}\Vert ={{O}}_{P}(n^{-1/2})={{o}}_{P}(1),\\ C_{n7}\le & {} \Vert n^{-1/2}{\textbf{D}}_{n}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\Vert {\cdot } \Vert n^{-1/2}{\textbf{R}}_{n}\Vert ={{O}}_{P}(K^{-{\delta }}({\sqrt{K/n}}+K^{-{\delta }}))={{o}}_{P}(1),\\ C_{n8}\le & {} \Vert n^{-1/2}{\textbf{D}}_{n}(\widehat{\varvec{\gamma }}-{\varvec{\gamma }}_{0})\Vert {\cdot } \Vert n^{-1/2}{\varvec{\varepsilon }}_{n}\Vert ={{O}}_{P}({\sqrt{K/n}}+K^{-{\delta }})={{o}}_{P}(1),\\ C_{n9}\le & {} \Vert n^{-1/2}{\textbf{R}}_{n}\Vert {\cdot } \Vert n^{-1/2}{\varvec{\varepsilon }}_{n}\Vert ={{O}}_{P}(K^{-{\delta }})={{o}}_{P}(1). \end{aligned}$$

Next, we prove part (b) of Theorem 3. By Theorems 1 and 2, we have \(\widehat{\varvec{\rho }}\overset{\text {P}}{\longrightarrow }{\varvec{\rho }}_{0}\) and \({\widehat{\varvec{\beta }}}(u)\overset{\text {P}}{\longrightarrow } {\varvec{\beta }}_{0}(u)\). This together with part (a) yields \(\widehat{\varvec{\varSigma }}\overset{\text {P}}{\longrightarrow } {\varvec{\varSigma }}\).

Finally, we prove part (c) of Theorem 3. Let \(\widehat{{\varSigma }}_{ij}(u)\) and \({{\varSigma }}_{ij}(u)\) denote the \((i,j)\)th elements of \(\widehat{\varvec{\varSigma }}(u)\) and \({\varvec{\varSigma }}(u)\), respectively. Then \(\widehat{{\varSigma }}_{ij}(u)={{\widehat{\sigma }}}^{2}{\textbf{e}}_{i}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{e}}_{j}\) and \({{\varSigma }}_{ij}(u)={\sigma }_{0}^{2}{\textbf{e}}_{i}^{\textrm{T}}{\varvec{\varGamma }}(u)({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} {\varvec{\varGamma }}(u)^{\textrm{T}}{\textbf{e}}_{j}\), where \({\textbf{e}}_{i}\) is the \(q{\times }1\) vector whose ith element is 1 and whose other elements are 0, and \({\textbf{e}}_{j}\) is defined similarly. It follows from part (a) of Theorem 3 that \({\widehat{\sigma }}^{2}-{\sigma }_{0}^2={{o}}_{P}(1)\). This together with Assumptions 4.3–4.5 yields

$$\begin{aligned}\widehat{{\varSigma }}_{ij}(u)-{{\varSigma }}_{ij}(u)= & {} ({\widehat{\sigma }}^{2}-{\sigma }_{0}^{2})({\textbf{e}}_{i}^{\textrm{T}}{\otimes }{\textbf{B}}(u)^{\textrm{T}})({\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} ({\textbf{e}}_{j}{\otimes }{\textbf{B}}(u))\\= & {} {{o}}_{P}(1)n^{-1}({\textbf{e}}_{i}^{\textrm{T}}{\otimes }{\textbf{B}}(u)^{\textrm{T}})(n^{-1}{\textbf{D}}_{n}^{\textrm{T}}{\textbf{D}}_{n})^{-1} ({\textbf{e}}_{j}{\otimes }{\textbf{B}}(u))\\\le & {} {\underline{c}}_{D}^{-1}{{o}}_{P}(1)n^{-1}\Vert {\textbf{B}}(u)\Vert ^{2}I(i=j)\\\le & {} {{o}}_{P}({\zeta }(K)^{2}/n)I(i=j)\\= & {} {{o}}_{P}(({\zeta }(K)^{2}K/n){K}^{-1})I(i=j)\\= & {} {{o}}_{P}(1), \end{aligned}$$

where \(I(\cdot )\) is the indicator function. This shows that \(\widehat{\varvec{\varSigma }}(u)\) is a consistent estimator of \({\varvec{\varSigma }}(u)\). \(\square \)
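Part (a) can be illustrated in a stripped-down series regression. The sketch below drops the spatial lag term \({\textbf{Z}}_{n}\widehat{\varvec{\rho }}\) (whose construction needs the weight matrices) and checks that the residual-variance estimator \(n^{-1}\Vert {\textbf{Y}}_{n}-{\textbf{D}}_{n}\widehat{\varvec{\gamma }}\Vert ^{2}\) approaches \({\sigma }_{0}^{2}\); all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma0 = 0.5                                      # true error standard deviation

def sigma2_hat(n, K=8):
    u = rng.uniform(0.0, 1.0, n)
    D = np.vander(u, K, increasing=True)          # sieve design matrix D_n
    gamma0 = rng.standard_normal(K) / np.arange(1, K + 1)
    y = D @ gamma0 + sigma0 * rng.standard_normal(n)
    gamma_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ gamma_hat
    return float(resid @ resid / n)               # n^{-1} ||Y - D gamma_hat||^2

est = [sigma2_hat(n) for n in (200, 2000, 20000)]
print(est)                                        # settles near sigma0^2 = 0.25
```

Because the fitted sieve has fixed dimension \(K\) here, the estimator loses only \(K/n\) degrees of freedom, consistent with the \({{o}}_{P}(1)\) remainders in the proof.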

Proof of Theorems 4 and 5

We only prove Theorem 5 because Theorem 4 is a special case of Theorem 5. By Theorem 1, it is easy to show

$$\begin{aligned} {\sqrt{n}}({\textbf{R}}\widehat{\varvec{\rho }}-{\textbf{b}}) \overset{\text {D}}{\longrightarrow }N({\sqrt{n}}({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}}), {\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}}). \end{aligned}$$

This implies that

$$\begin{aligned} {\sqrt{n}}({\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1/2}({\textbf{R}}\widehat{\varvec{\rho }}-{\textbf{b}}) \overset{\text {D}}{\longrightarrow }N({\varvec{\mu }},{\textbf{I}}_{d}), \end{aligned}$$

where \({\varvec{\mu }}={\sqrt{n}}({\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1/2} ({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})\). This together with Theorem 3(b) and Slutsky's theorem yields

$$\begin{aligned} {\sqrt{n}}({\textbf{R}}\widehat{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1/2} ({\textbf{R}}\widehat{\varvec{\rho }}-{\textbf{b}}) \overset{\text {D}}{\longrightarrow }N({\varvec{\mu }},{\textbf{I}}_{d}). \end{aligned}$$

Thus, we have

$$\begin{aligned} T_{1}=n({\textbf{R}}\widehat{\varvec{\rho }}-{\textbf{b}})^{\textrm{T}} ({\textbf{R}}\widehat{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1} ({\textbf{R}}\widehat{\varvec{\rho }}-{\textbf{b}}) \overset{\text {D}}{\longrightarrow }{\chi }_{d}^{2}(\lambda ), \end{aligned}$$

where \({\lambda }=\lim _{n{\rightarrow }{\infty }}{\varvec{\mu }}^{\textrm{T}}{\varvec{\mu }} =\lim _{n{\rightarrow }{\infty }}n({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})^{\textrm{T}} ({\textbf{R}}{\varvec{\varSigma }}{\textbf{R}}^{\textrm{T}})^{-1} ({\textbf{R}}{\varvec{\rho }}_{0}-{\textbf{b}})\). \(\square \)
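In applications \(T_{1}\) is formed from the fitted \(\widehat{\varvec{\rho }}\) and \(\widehat{\varvec{\varSigma }}\). The sketch below checks the \({\chi }_{d}^{2}\) calibration under the null by drawing \(\widehat{\varvec{\rho }}\) directly from its limiting distribution \(N({\varvec{\rho }}_{0},{\varvec{\varSigma }}/n)\) rather than fitting the spatial model; the values of \({\varvec{\rho }}_{0}\), \({\textbf{R}}\), \({\textbf{b}}\) and \({\varvec{\varSigma }}\) are illustrative:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n, d = 400, 2
rho0 = np.array([0.3, 0.2, 0.1])                 # r = 3 spatial lag parameters
R = np.array([[1.0, -1.0, 0.0],                  # H0: rho_1 = rho_2 and rho_3 = 0.1
              [0.0, 0.0, 1.0]])
b = R @ rho0                                     # null hypothesis holds by construction
Sigma = np.diag([0.4, 0.4, 0.2])                 # limiting covariance of sqrt(n)(rho_hat - rho0)

def wald_stat():
    # Draw rho_hat from its limiting distribution and form the Wald statistic T1.
    rho_hat = rng.multivariate_normal(rho0, Sigma / n)
    diff = R @ rho_hat - b
    return n * diff @ np.linalg.solve(R @ Sigma @ R.T, diff)

T1 = np.array([wald_stat() for _ in range(5000)])
size = float(np.mean(T1 > chi2.ppf(0.95, df=d)))
print(size)                                      # empirical size near the nominal 0.05
```

Under a fixed alternative \({\textbf{R}}{\varvec{\rho }}_{0}\ne {\textbf{b}}\), the noncentrality \({\lambda }\) grows with \(n\), so the same statistic gains power.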

Cite this article

Li, T., Wang, Y. & Fang, K. A semiparametric dynamic higher-order spatial autoregressive model. Stat Papers 65, 1085–1123 (2024). https://doi.org/10.1007/s00362-023-01489-y
