
Estimation of Panel Model with Spatial Autoregressive Error and Common Factors

Published in Computational Economics.

Abstract

This study explores the estimation of a panel model that combines a multifactor error structure with spatial correlation. On the basis of the common correlated effects pooled (CCEP) estimator (Pesaran in Econometrica 74:967–1012, 2006), the generalized moments (GM) procedure suggested by Kelejian and Prucha (Int Econ Rev 40:509–533, 1999) is employed to estimate the spatial autoregressive parameters. These estimates are then used to define feasible generalized least squares (FGLS) procedures for the regression parameters. As N and T \(\longrightarrow \infty \) (jointly), this study provides formal large sample results on the consistency of the proposed GM procedures, as well as the consistency and asymptotic normality of the proposed FGLS estimators. It is proved that FGLS is more efficient than CCEP. The small sample properties of the various estimators are investigated through Monte Carlo experiments, which confirm the theoretical conclusions. Results demonstrate that the spatial correlation analysis popular in previous empirical literature may be misleading because it neglects common factors.

Fig. 1

Notes

  1. See Anselin (1988, pp. 150–154).

  2. \([\hbox {var}(\hat{{\beta }}_{GLS} )]^{-1}-[\hbox {var}(\hat{{\beta }}^{\# })]^{-1}=(X^{*\prime }\Omega _e ^{-1/2})[I_{NT} -\Omega _e ^{1/2}X^{*}(X^{*\prime }\Omega _e ^{-1}X^{*})^{-1}X^{*\prime }\Omega _e ^{1/2}](\Omega _e ^{-1/2}X^{*})\). Obviously, the middle term \([I_{NT} -\Omega _e ^{1/2}X^{*}(X^{*\prime }\Omega _e ^{-1}X^{*})^{-1}X^{*\prime }\Omega _e ^{1/2}]\) is symmetric and idempotent. Therefore both \([\hbox {var}(\hat{{\beta }}_{GLS} )]^{-1}-[\hbox {var}(\hat{{\beta }}^{\# })]^{-1}\) and \(\hbox {var}(\hat{{\beta }}^{\# })-\hbox {var}(\hat{{\beta }}_{GLS} )\) are positive semidefinite matrices, which proves the efficiency of the GLS estimator.

  3. We also consider the calculation method of bias and RMSE proposed by Kapoor et al. (2007), that is, RMSE is based on quantiles rather than moments, whereas bias is based on the difference between the true value and median rather than the mean. All results are similar.

  4. Lemma 2 is readily verified; see Kelejian and Prucha (1999) for details.

  5. Let \(C=AZ\), then \(\left| {C_{ij} } \right| =\left| {\sum \nolimits _{k=1}^N {A_{ik} Z_{kj} } } \right| \le \sum \nolimits _{k=1}^N {\left| {A_{ik} Z_{kj} } \right| } =O_p (1)\sum \nolimits _{k=1}^N {\left| {A_{ik} } \right| } \le O_p (1)K=O_p (1)\).

  6. See Schmidt (1976, p. 71) and Kapoor et al. (2007) for details.
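The matrix inequality in Note 2 can also be checked numerically. The sketch below is illustrative only: an arbitrary design matrix stands in for \(X^{*}\) and an arbitrary positive definite matrix for \(\Omega _e \), with assumed dimensions; the difference of the inverse variance matrices should then have no negative eigenvalues.

```python
import numpy as np

# Illustrative check of Note 2 (assumed dimensions and matrices):
# [var(beta_GLS)]^{-1} - [var(beta^#)]^{-1} is positive semidefinite.
rng = np.random.default_rng(0)
NT, k = 40, 3

X = rng.standard_normal((NT, k))        # stands in for X*
A = rng.standard_normal((NT, NT))
Omega = A @ A.T + NT * np.eye(NT)       # stands in for a positive definite Omega_e

Omega_inv = np.linalg.inv(Omega)
var_gls_inv = X.T @ Omega_inv @ X       # inverse variance of the GLS estimator

# Sandwich variance of the unweighted (pooled) estimator
XtX_inv = np.linalg.inv(X.T @ X)
var_pooled = XtX_inv @ (X.T @ Omega @ X) @ XtX_inv
var_pooled_inv = np.linalg.inv(var_pooled)

# Smallest eigenvalue of the difference; should be nonnegative up to rounding
min_eig = np.linalg.eigvalsh(var_gls_inv - var_pooled_inv).min()
```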

References

  • Anselin, L. (1988). Spatial econometrics: Methods and models. Boston: Kluwer Academic Publishers.


  • Bai, J., & Ng, S. (2002). Determining the number of factors in approximate factor models. Econometrica, 70, 161–221.


  • Bai, J. (2009). Panel data models with interactive fixed effects. Econometrica, 77(4), 1229–1279.


  • Bailey, N., Holly, S., & Pesaran, M. H. (2013). A two stage approach to spatiotemporal analysis with strong and weak cross-sectional dependence. CESifo Working Paper Series 4592.

  • Bailey, N., Kapetanios, G., & Pesaran, M. H. (2012). Exponents of cross-sectional dependence: Estimation and inference. CESifo Working Paper No. 3722, revised July 2013.

  • Baltagi, B. H., Song, S. H., & Koh, W. (2003). Testing panel data regression models with spatial error correlation. Journal of Econometrics, 117, 123–150.


  • Chudik, A., & Pesaran, M. H. (2013a). Common correlated effects estimation of heterogeneous dynamic panel data models with weakly exogenous regressors. CESifo Working Paper Series 4232.

  • Chudik, A., & Pesaran, M. H. (2013b). Large panel data models with cross-sectional dependence: A survey. CESifo Working Paper Series 4371.

  • Chudik, A., Pesaran, M. H., & Tosetti, E. (2011). Weak and strong cross section dependence and estimation of large panels. Econometrics Journal, 14, C45–C90.


  • Coakley, J., Fuertes, A., & Smith, R. (2002). A principal components approach to cross-section dependence in panels. Working Paper.

  • Holly, S., Pesaran, M. H., & Yamagata, T. (2010). A spatio-temporal model of house prices in the USA. Journal of Econometrics, 158(1), 160–173.


  • Jennrich, R. (1969). Asymptotic properties of non-linear least squares estimators. The Annals of Mathematical Statistics, 40, 633–643.


  • Kapetanios, G., Pesaran, M. H., & Yamagata, T. (2011). Panels with nonstationary multifactor error structures. Journal of Econometrics, 160, 326–348.


  • Kapoor, M., Kelejian, H. H., & Prucha, I. (2007). Panel data models with spatially correlated error components. Journal of Econometrics, 140, 97–130.


  • Kelejian, H. H., & Prucha, I. R. (1999). A generalized moments estimator for the autoregressive parameter in a spatial model. International Economic Review, 40, 509–533.


  • Kelejian, H. H., & Prucha, I. R. (2001). On the asymptotic distribution of the Moran I test statistic with applications. Journal of Econometrics, 104, 219–257.


  • Lee, L. F., & Yu, J. H. (2010). Estimation of spatial autoregressive panel data models with fixed effects. Journal of Econometrics, 154(2), 165–185.


  • Pesaran, M. H., & Tosetti, E. (2011). Large panels with common factors and spatial correlation. Journal of Econometrics, 161(2), 182–202.


  • Pesaran, M. H. (2006). Estimation and inference in large heterogeneous panels with a multifactor error structure. Econometrica, 74, 967–1012.


  • Phillips, P. C. B., & Sul, D. (2003). Dynamic panel estimation and homogeneity testing under cross section dependence. Econometrics Journal, 6, 217–259.


  • Pötscher, B. M., & Prucha, I. R. (1997). Dynamic nonlinear econometric models, asymptotic theory. New York: Springer.


  • Schmidt, P. (1976). Econometrics. New York: Marcel Dekker.



Acknowledgments

The author acknowledges support from Youth Foundation of Guangdong Academy of Social Sciences under Grant No. 2013G0147 and Theory Group Plan of Guangdong under Grant No. WT1409.

Author information

Correspondence to J. B. Qian.

Appendix

1.1 Proof of Theorem 1

1.1.1 Preliminary Statements and Proofs

First, we provide properties (35)–(51) and Lemmata 1–4, together with their proofs, which are needed for the proof of Theorem 1.

Rewrite (2) and \(\overline{{H}}\), defined in (9), respectively as

$$\begin{aligned} X_i&= G\Pi _i +V_i ,\end{aligned}$$
(35)
$$\begin{aligned} \overline{{H}}&= G\overline{{J}}+\overline{{U}}^{*} , \end{aligned}$$
(36)

where \(G=(D,F),\,\Pi _i =({A}_i^{\prime } ,{\Gamma }_i^{\prime } )^{\prime },\,V_i =(v_{i1} ,\ldots ,v_{iT})^{\prime }\),

$$\begin{aligned}&\overline{{J}}=\left( {{\begin{array}{ll} {I_n }&{} {\overline{{B}}} \\ 0&{} {\overline{{C}}} \\ \end{array} }} \right) ,\nonumber \\&\overline{{U}}^{*}=(0,\overline{{U}}), \, \overline{{U}}=(\overline{{u}}_1 ,\ldots ,\overline{{u}}_T )^{\prime }, \, \overline{{B}}=\frac{1}{N}\sum \limits _{i=1}^N {B_i } , \, \overline{{C}}=\frac{1}{N}\sum \limits _{i=1}^N {C_i } . \end{aligned}$$
(37)

The following results (38)–(48) prove helpful and are used throughout the proof. Given Assumptions 1 and 3, we obtain the following:

$$\begin{aligned} \overline{{u}}_t&= O_p \left( \frac{1}{\sqrt{N}}\right) ,\end{aligned}$$
(38)
$$\begin{aligned} \frac{{\overline{{U}}}^{\prime }\overline{{U}}}{T}&= O_p \left( \frac{1}{N}\right) ,\end{aligned}$$
(39)
$$\begin{aligned} \frac{{F}^{\prime }\overline{{U}}}{T}&= O_p \left( \frac{1}{\sqrt{NT}}\right) , \frac{{D}^{\prime }\overline{{U}}}{T}=O_p \left( \frac{1}{\sqrt{NT}}\right) ,\end{aligned}$$
(40)
$$\begin{aligned} \frac{{e}_i^{\prime } \overline{{U}}}{T}&= O_p \left( \frac{1}{\sqrt{NT}}\right) +O_p \left( \frac{1}{N}\right) ,\end{aligned}$$
(41)
$$\begin{aligned} \frac{{\overline{{H}}}^{\prime }\overline{{H}}}{T}&= {\overline{{J}}}^{\prime }\frac{{G}^{\prime }G}{T}\overline{{J}}+O_p \left( \frac{1}{\sqrt{NT}}\right) +O_p \left( \frac{1}{N}\right) ,\end{aligned}$$
(42)
$$\begin{aligned} \frac{{\overline{{H}}}^{\prime }F}{T}&= {\overline{{J}}}^{\prime }\frac{{G}^{\prime }F}{T}+O_p \left( \frac{1}{\sqrt{NT}}\right) ,\end{aligned}$$
(43)
$$\begin{aligned} \frac{{X}_i^{\prime } M_h X_j }{T}&= \frac{{X}_i^{\prime } M_g X_j }{T}+O_p \left( \frac{1}{\sqrt{NT}}\right) +O_p \left( \frac{1}{N}\right) ,\end{aligned}$$
(44)
$$\begin{aligned} \frac{{X}_i^{\prime } M_h F}{T}&= O_p \left( \frac{1}{\sqrt{NT}}\right) +O_p (\frac{1}{N}) ,\end{aligned}$$
(45)
$$\begin{aligned} \frac{{X}_i^{\prime } M_h e_i }{T}&= \frac{{X}_i^{\prime } M_g e_i }{T}+O_p \left( \frac{1}{N}\right) +O_p \left( \frac{1}{\sqrt{NT}}\right) ,\end{aligned}$$
(46)
$$\begin{aligned} \frac{{e}_i^{\prime } M_h e_j }{T}&= \frac{{e}_i^{\prime } M_g e_j }{T}+O_p \left( \frac{1}{N}\right) +O_p \left( \frac{1}{\sqrt{NT}}\right) ,\end{aligned}$$
(47)
$$\begin{aligned} M_h F&= o_p (1) , \end{aligned}$$
(48)

where (38)–(46) have been proven by Pesaran (2006) and Pesaran and Tosetti (2011); thus, their proofs are omitted here. (47) and (48) are established as follows:

From (9), we obtain the following:

$$\begin{aligned}&\frac{{e}_i^{\prime } M_h e_j }{T}=\frac{{e}_i^{\prime } e_j }{T}+\frac{{e}_i^{\prime } \overline{{H}}}{T}\left( \frac{{\overline{{H}}}^{\prime }\overline{{H}}}{T}\right) ^{-1}\frac{{\overline{{H}}}^{\prime }e_j }{T} ,\end{aligned}$$
(49)
$$\begin{aligned}&M_h F=F-\overline{{H}}\left( \frac{{\overline{{H}}}^{\prime }\overline{{H}}}{T}\right) ^{-1}\frac{{\overline{{H}}}^{\prime }F}{T}. \end{aligned}$$
(50)

By using (36), (41), and (42) in (49), (47) is readily established. By using (36), (42), (43), and (38) in (50), we obtain the following:

$$\begin{aligned} M_h F=M_g F+O_p (1/\sqrt{N}) , \end{aligned}$$
(51)

Given \(F\subset G,\,M_g F=0\), we have \(M_h F=O_p (1/\sqrt{N})=o_p (1)\).

Lemma 1

Central limit theorem

Let \(\{v_{i,NT} ,1\le i\le NT,N,T\ge 1\}\) be a triangular array of independent and identically distributed random variables with \(E(v_{i,NT} )=0\) and \(E(v_{i,NT}^2 )=\sigma ^{2}<\infty \). Let \(\{Z_{ij,NT} ,1\le i\le NT,N,T\ge 1\}\), \(j=1,\ldots ,K\), be a triangular array that is bounded in probability. Let \(V_{NT} =(v_{i,NT} )\) and \(Z_{NT} =(Z_{ij,NT} )\) denote the corresponding \(NT\times 1\) and \(NT\times K\) random matrices. If \(\mathop {p\lim }\limits _{(N,T)\longrightarrow \infty } (NT)^{-1}Z_{NT} {\prime }Z_{NT} =Q\), where \(Q\) is a finite positive definite matrix, then \((NT)^{-1/2}Z_{NT} {\prime }V_{NT} \mathop {\longrightarrow }^{d}N(0,\sigma ^{2}Q)\).
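Lemma 1 can be illustrated with a small Monte Carlo sketch. All sizes below are illustrative assumptions, not taken from the paper: across replications, the empirical distribution of \((NT)^{-1/2}{Z}^{\prime }V\) should have mean near zero and covariance near \(\sigma ^{2}Q\).

```python
import numpy as np

# Monte Carlo sketch of Lemma 1 (all sizes are illustrative assumptions):
# (NT)^{-1/2} Z'V should have mean ~0 and covariance ~ sigma^2 * Q,
# where Q = (NT)^{-1} Z'Z.
rng = np.random.default_rng(1)
N, T, R = 20, 20, 4000
NT = N * T
sigma = 1.5

Z = rng.standard_normal((NT, 2))        # stands in for the bounded array Z_NT
Q = Z.T @ Z / NT

V = sigma * rng.standard_normal((NT, R))    # R independent draws of V_NT
scores = Z.T @ V / np.sqrt(NT)              # column r holds (NT)^{-1/2} Z'V_r

emp_mean = scores.mean(axis=1)
emp_cov = np.cov(scores)
```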

Lemma 2 (see Note 4) Let R be a (sequence of) N \(\times \) N matrices whose row and column sums are bounded uniformly in absolute value, and let S be a k \(\times \) k matrix (with k \(\ge \) 1 fixed). The row and column sums of \(S\otimes R\) are then bounded uniformly in absolute value.

Lemma 3

If A and B are (sequences of) kN \(\times \) kN matrices (with k \(\ge \) 1 fixed) whose row and column sums are bounded uniformly in absolute value, then so are the row and column sums of AB and A + B. If Z is a (sequence of) kN \(\times \) p matrices whose elements are uniformly bounded in absolute value, then so are the elements of AZ and \((kN)^{-1}{Z}^{\prime }AZ\).

Lemma 4

If A is an N \(\times \) N matrix whose row and column sums are bounded uniformly in absolute value and Z is an N \(\times \) p random matrix whose elements are bounded in probability, then the elements of \(AZ\) (see Note 5) and \(N^{-1}{Z}^{\prime }AZ\) are bounded in probability.

1.1.2 Part (a) of Theorem 1

First, we observe the following:

$$\begin{aligned} (NT)^{1/2}(\hat{{\beta }}_{GLS} -\beta )&= [(NT)^{-1}X^{*\prime }\Omega _e ^{-1}X^{*}]^{-1}(NT)^{-1/2}X^{*\prime }\Omega _e ^{-1}e,\nonumber \\&= Q_{xx} ^{-1}(NT)^{-1/2}X^{*\prime }\Omega _e ^{-1}e, \end{aligned}$$
(52)

where \(\hat{{\beta }}_{GLS} \) is defined in (24).

Recalling from (15) that \(\Omega _e ^{-1}=\sigma _\varepsilon ^{-2} ({P}^{\prime }P)\otimes I_T \), then

$$\begin{aligned} (NT)^{-1/2}X^{*\prime }\Omega _e ^{-1}e=(NT)^{-1/2}\sigma _\varepsilon ^{-2} [(P\otimes I_T )X^{*}]^{\prime }(P\otimes I_T )e \end{aligned}$$
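The Kronecker algebra behind this step can be checked numerically. The sketch below (small \(N\), \(T\), \(\lambda \), and an arbitrary row-normalized \(W\), all assumed for illustration) verifies the mixed-product identity \((P\otimes I_T {)}^{\prime }(P\otimes I_T )=({P}^{\prime }P)\otimes I_T \), which is what lets \(P\otimes I_T \) act as the whitening transformation.

```python
import numpy as np

# Illustrative check (assumed small N, T, lambda, and W) of the identity
# (P (x) I_T)'(P (x) I_T) = (P'P) (x) I_T, with P = I_N - lambda * W.
rng = np.random.default_rng(2)
N, T = 5, 4
lam = 0.4

W = rng.random((N, N))                  # arbitrary row-normalized weights
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

P = np.eye(N) - lam * W
I_T = np.eye(T)

lhs = np.kron(P, I_T).T @ np.kron(P, I_T)
rhs = np.kron(P.T @ P, I_T)
```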

Given Assumptions 1–3 and 5-b, Lemma 4 shows that \((P\otimes I_T )X^{*}\) and \((P\otimes I_T )e\) satisfy the requirements on \(v_{i,NT} \) and \(Z_{ij,NT} \) in Lemma 1. Furthermore,

$$\begin{aligned} (NT)^{-1/2}X^{*\prime }\Omega _e ^{-1}e\mathop {\longrightarrow }^{d}N(0,\Pi ) . \end{aligned}$$
(53)

It follows from (52) and (53) that \((NT)^{1/2}(\hat{{\beta }}_{GLS} -\beta )\mathop {\longrightarrow }^{d}N[0,\Pi ^{-1}]\), which establishes part (a) of Theorem 1.

1.1.3 Part (b) of Theorem 1

To prove this part, it suffices to show that (see Note 6)

$$\begin{aligned} (NT)^{-1}[(\tilde{X}^{*\prime }\tilde{\Omega }_e ^{-1}\tilde{X}^{*})-X^{*\prime }\Omega _e ^{-1}X^{*}]\mathop {\longrightarrow }^{p}0 , \end{aligned}$$
(54)

and

$$\begin{aligned} (NT)^{-1/2}[(\tilde{X}^{*\prime }\tilde{\Omega }_e ^{-1}\tilde{e})-X^{*\prime }\Omega _e ^{-1}e]\mathop {\longrightarrow }^{p}0 , \end{aligned}$$
(55)

Recall from (22) and (24) that \(X^{*}=\overline{{M}}_g X=(I_N \otimes M_g )X\), \(\tilde{X}^{*}=\overline{{M}}_h X=(I_N \otimes M_h )X\). In light of (15) and (25), it is known that \(\Omega _e =({P}^{\prime }P)^{-1}\otimes I_T,\,\tilde{\Omega }_e =({\tilde{P}}^{\prime }\tilde{P})^{-1}\otimes I_T ,\,P=I_N -\lambda W,\, \tilde{P}=I_N -\tilde{\lambda }W\).

We first demonstrate (54) as follows:

$$\begin{aligned}&(NT)^{-1}[(\tilde{X}^{*\prime }\tilde{\Omega }_e ^{-1}\tilde{X}^{*})-X^{*\prime }\Omega _e ^{-1}X^{*}]\nonumber \\&\quad =(NT)^{-1}({X}^{\prime }\overline{{M}}_h \tilde{\Omega }_e ^{-1}\overline{{M}}_h X-{X}^{\prime }\overline{{M}}_g \Omega _e ^{-1}\overline{{M}}_g X)\nonumber \\&\quad =(NT)^{-1}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P})\otimes M_h -({P}^{\prime }P)\otimes M_g ]X\nonumber \\&\quad =(NT)^{-1}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P}-P^{\prime }P)\otimes M_h ]X\nonumber \\&\qquad +\,(NT)^{-1}{X}^{\prime }[({P}^{\prime }P)\otimes (M_h -M_g )]X . \end{aligned}$$
(56)

Note that

$$\begin{aligned} {\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P=(\lambda -\tilde{\lambda })({W}^{\prime }+W)+(\tilde{\lambda }^{2}-\lambda ^{2})({W}^{\prime }W) . \end{aligned}$$
(57)
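This expansion can be verified numerically. In the sketch below (illustrative \(N\), \(W\), and parameter values, all assumed), direct algebra with \(P=I-\lambda W\) and \(\tilde{P}=I-\tilde{\lambda }W\) gives the coefficient \(\tilde{\lambda }^{2}-\lambda ^{2}\) on \({W}^{\prime }W\).

```python
import numpy as np

# Numerical check (assumed N, W, and parameter values) of the expansion:
# with P = I - lam*W and P~ = I - lam_t*W,
#   P~'P~ - P'P = (lam - lam_t)(W' + W) + (lam_t**2 - lam**2) * W'W.
rng = np.random.default_rng(3)
N = 6
lam, lam_t = 0.5, 0.43                  # "true" and estimated spatial parameters

W = rng.random((N, N))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

P = np.eye(N) - lam * W
P_t = np.eye(N) - lam_t * W

lhs = P_t.T @ P_t - P.T @ P
rhs = (lam - lam_t) * (W.T + W) + (lam_t**2 - lam**2) * (W.T @ W)
```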

Substituting (57) into the first part of (56) shows that

$$\begin{aligned} (NT)^{-1}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P)\otimes M_h ]X&=(\lambda -\tilde{\lambda })(NT)^{-1}{X}^{\prime }[({W}^{\prime }+W)\otimes M_h ]X\\&\qquad +\,(\tilde{\lambda }^{2}-\lambda ^{2})(NT)^{-1}{X}^{\prime }[({W}^{\prime }W)\otimes M_h ]X\\&\quad =(\lambda -\tilde{\lambda })(NT)^{-1}\tilde{X}^{*\prime }[({W}^{\prime }+W)\otimes I_T ]\tilde{X}^{*}\\&\qquad +\,(\tilde{\lambda }^{2}-\lambda ^{2})(NT)^{-1}\tilde{X}^{*\prime }[({W}^{\prime }W)\otimes I_T ]\tilde{X}^{*} . \end{aligned}$$

Given Assumption 5-b, \(\tilde{X}^{*}=O_p (1)\). In light of Lemma 3, the row and column sums of the matrices \(({W}^{\prime }+W)\otimes I_T \) and \(({W}^{\prime }W)\otimes I_T \) are bounded uniformly in absolute value.

Lemma 4 indicates that \((NT)^{-1}\tilde{X}^{*\prime }[({W}^{\prime }+W)\otimes I_T ]\tilde{X}^{*}=O_p (1)\) and \((NT)^{-1}\tilde{X}^{*\prime }[({W}^{\prime }W)\otimes I_T ]\tilde{X}^{*}=O_p (1)\). Therefore,

$$\begin{aligned} (NT)^{-1}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P)\otimes M_h ]X\mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(58)

For the second part of (56), let \(B={P}^{\prime }P\). Using multiplication of partitioned matrices, we then obtain the following:

$$\begin{aligned}&(NT)^{-1}{X}^{\prime }[({P}^{\prime }P)\otimes (M_h -M_g )]X\nonumber \\&\quad =(NT)^{-1}{X}^{\prime }[B\otimes (M_h -M_g )]X\nonumber \\&\quad =(N)^{-1}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Biggl [B_{ij} \Bigg (} } \frac{{X}_i^{\prime } M_h X_j }{T}-\frac{{X}_i^{\prime } M_g X_j }{T}\Bigg )\Biggr ] , \end{aligned}$$
(59)

where \(X_i =(x_{i1} ,\ldots ,x_{it} ,\ldots ,x_{iT})^{\prime }\) and \(B_{ij} \) is the (i, j)-th element of the matrix \(B\).

Substituting (44) into (59), and noting from Lemma 3 that the row and column sums of matrix B are bounded uniformly in absolute value, we readily see that

$$\begin{aligned} (NT)^{-1}{X}^{\prime }[({P}^{\prime }P)\otimes (M_h -M_g )]X\mathop {\longrightarrow }^{p}0 , \end{aligned}$$
(60)

Combining (56), (58) and (60) establishes (54).

We next demonstrate (55). By using (12), we obtain the following:

$$\begin{aligned} y-X\beta =(I_N \otimes D)\alpha +(I_N \otimes F)\gamma +e. \end{aligned}$$
(61)

Thus, (55) can be written as

$$\begin{aligned}&(NT)^{-1/2}[(\tilde{X}^{*\prime }\tilde{\Omega }_e ^{-1}\tilde{e})-X^{*\prime }\Omega _e ^{-1}e]\nonumber \\&\quad =(NT)^{-1/2}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P})\otimes I_T ](I_N \otimes M_h )(I_N \otimes F)\gamma \nonumber \\&\qquad +\,(NT)^{-1/2}{X}^{\prime }\big [({\tilde{P}}^{\prime }\tilde{P})\otimes M_h -({P}^{\prime }P)\otimes M_g \big ]e . \end{aligned}$$
(62)

The first part of (62) can be written as

$$\begin{aligned}&(NT)^{-1/2}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P})\otimes I_T ](I_N \otimes M_h )(I_N \otimes F)\gamma \nonumber \\&\quad =(NT)^{-1/2}{X}^{\prime }[({P}^{\prime }P)\otimes I_T ](I_N \otimes M_h )(I_N \otimes F)\gamma \nonumber \\&\qquad +\,(NT)^{-1/2}{X}^{\prime }[({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P)\otimes I_T ](I_N \otimes M_h )(I_N \otimes F)\gamma . \end{aligned}$$
(63)

Let \(\Theta ={P}^{\prime }P\). The first part in (63) can be written as

$$\begin{aligned} (NT)^{-1/2}{X}^{\prime }[\Theta \otimes I_T ](I_N \otimes M_h )(I_N \otimes F)\gamma =(NT)^{-1/2}{X}^{\prime }(I_N \otimes M_h )(\Theta \otimes F)\gamma \end{aligned}$$

By definition, \(X=({X}_1^{\prime } ,\ldots ,{X}_N^{\prime } )^{\prime }\). Using multiplication of partitioned matrices yields the following:

$$\begin{aligned}&(NT)^{-1/2}{X}^{\prime }(I_N \otimes M_h )(\Theta \otimes F)\gamma \nonumber \\&\quad =(NT)^{-1/2}({X}^{\prime }_1 M_h ,\ldots ,{X}^{\prime }_N M_h )(\Theta \otimes F)\gamma \nonumber \\&\quad =(NT)^{-1/2}\left( \sum \limits _{i=1}^N {\Theta _{i1} {X}_i^{\prime } M_h } F,\ldots ,\sum \limits _{i=1}^N {\Theta _{iN} {X}_i^{\prime } M_h } F\right) \gamma \nonumber \\&\quad =(NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\gamma _j } } . \end{aligned}$$
(64)

By substituting (6) into (64), we obtain

$$\begin{aligned} (NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta } } +(NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta _j } } . \end{aligned}$$

Note that \((NT)^{-1/2}\sum \nolimits _{i=1}^N {\sum \nolimits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta } } =(NT)^{-1/2}\sum \nolimits _{i=1}^N {{X}_i^{\prime } M_h \sum \nolimits _{j=1}^N {\Theta _{ij} } } F\eta \).

From Assumption 4, we obtain \(\sum \nolimits _{j=1}^N {\Theta _{ij}} =\overline{{\Theta }}+\pi _i \). By using \({M}^{\prime }_h \overline{{X}}=0\),

$$\begin{aligned} (NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta } } =\left( \frac{N}{T}\right) ^{-1/2}\sum \limits _{i=1}^N {\left( \frac{{X}_i^{\prime } M_h F}{T}\right) \pi _i } \eta \end{aligned}$$

Recalling that \(\pi _i =O\left( \frac{1}{\sqrt{N}}\right) \) and \({X}_i^{\prime } M_h F/T=O_p [(NT)^{-1/2}]+O_p (N^{-1})\), we thus obtain

$$\begin{aligned} (NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta } } =o_p (1) . \end{aligned}$$
(65)

Considering \((NT)^{-1/2}\sum \nolimits _{i=1}^N {\sum \nolimits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta _j } } =(T/N)^{1/2}\sum \nolimits _{j=1}^N \eta _j \sum \nolimits _{i=1}^N{\Theta _{ij} ({X}_i^{\prime } M_h F/T)} \).

In light of (45), \({X}_i^{\prime } M_h F/T=O_p [(NT)^{-1/2}]+O_p (N^{-1})\). Given Assumption 3, \(N^{-1/2}\sum \nolimits _{j=1}^N {\eta _j } =O_p (1)\), and because \(\left| {\sum \nolimits _{i=1}^N {\Theta _{ij} } } \right| \le K\), we obtain

$$\begin{aligned} (NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta _j } } =O_p \big (N^{-1/2}\big )+O_p \big (T^{1/2}/N\big ). \end{aligned}$$

If the condition \(T^{1/2}/N\longrightarrow 0\) is satisfied, we obtain

$$\begin{aligned} (NT)^{-1/2}\sum \limits _{i=1}^N {\sum \limits _{j=1}^N {\Theta _{ij} {X}_i^{\prime } M_h F\eta _j } } \mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(66)

Combining (65) and (66),

$$\begin{aligned} (NT)^{-1/2}{X}^{\prime }\big [({P}^{\prime }P)\otimes I_T \big ]\big (I_N \otimes M_h \big )\big (I_N \otimes F\big )\gamma \mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(67)

The use of \({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P\mathop {\longrightarrow }^{p}0\) establishes the first part of (62), i.e.,

$$\begin{aligned} (NT)^{-1/2}{X}^{\prime }\big [({\tilde{P}}^{\prime }\tilde{P})\otimes I_T \big ]\big (I_N \otimes M_h \big )\big (I_N \otimes F\big )\gamma \mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(68)

The second part in (62) can be written as follows:

$$\begin{aligned}&(NT)^{-1/2}{X}^{\prime }\big [({\tilde{P}}^{\prime }\tilde{P})\otimes M_h -({P}^{\prime }P)\otimes M_g \big ]e\nonumber \\&\quad =(NT)^{-1/2}{X}^{\prime }\big [({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P)\otimes M_h \big ]e\nonumber \\&\qquad +\,(NT)^{-1/2}{X}^{\prime }\big [({P}^{\prime }P)\otimes (M_h -M_g )\big ]e . \end{aligned}$$
(69)

By using (57), the first part of (69) can be rewritten as follows:

$$\begin{aligned} (NT)^{-1/2}{X}^{\prime }\Big [\big ({\tilde{P}}^{\prime }\tilde{P}-{P}^{\prime }P\big )\otimes M_h \Big ]e&=(\lambda -\tilde{\lambda })(NT)^{-1/2}{X}^{\prime }\big [({W}^{\prime }+W)\otimes M_h \big ]e\\&\quad +\,(\tilde{\lambda }^{2}-\lambda ^{2})(NT)^{-1/2}X^{\prime }\big [({W}^{\prime }W)\otimes M_h \big ]e. \end{aligned}$$

Let \(\Delta =(NT)^{-1/2}{X}^{\prime }[({W}^{\prime }+W)\otimes M_h ]e\). It is easy to verify that \(E\Delta =0\), and

$$\begin{aligned} \hbox {var}(\Delta )&= \frac{1}{NT}E\left\{ {X}^{\prime }\bigg [({W}^{\prime }+W)\otimes M_h \bigg ]\Omega _e \bigg [({W}^{\prime }+W)\otimes M_h \bigg ]X\right\} \\&= \frac{1}{NT}E\bigg [(\overline{{M}}_h X{)}^{\prime }\Upsilon (\overline{{M}}_h X)\bigg ], \end{aligned}$$

where \(\Upsilon =[({W}^{\prime }+W)({P}^{\prime }P)^{-1}({W}^{\prime }+W)]\otimes I_T \).

In light of Lemmata 2 and 3, the row and column sums of matrix \(\Upsilon \) are bounded uniformly in absolute value, which in conjunction with Assumption 5-b and Lemma 4 yields \(\hbox {var}(\Delta )=O(1)\). Therefore, \(\Delta =O_P (1)\).

Thus,

$$\begin{aligned} (\lambda -\tilde{\lambda })(NT)^{-1/2}{X}^{\prime }\Big [({W}^{\prime }+W)\otimes M_h \Big ]e\mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(70)

By using similar manipulations, we can also obtain

$$\begin{aligned} (\tilde{\lambda }^{2}-\lambda ^{2})(NT)^{-1/2}{X}^{\prime }\Big [({W}^{\prime }W)\otimes M_h \Big ]e\mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(71)

Combining (70) and (71) establishes (69), i.e.,

$$\begin{aligned} (NT)^{-1/2}{X}^{\prime }\big [({\tilde{P}}^{\prime }\tilde{P})\otimes M_h -({P}^{\prime }P)\otimes M_g \big ]e\mathop {\longrightarrow }^{p}0 \end{aligned}$$
(72)

Combining (68) and (72) shows (55), and thus establishes the validity of part (b) of Theorem 1.

1.1.4 Part (c) of Theorem 1

Part (c) of the theorem follows immediately from (54) and (55).

1.2 Proof of Eq. (27)

Given the specification in (3) and the definition of \(\varepsilon ^{*}\) in (26), the three moment conditions are derived straightforwardly by using Assumption 2 and Lemmata 2–4:

$$\begin{aligned} E\left[ \frac{1}{N(T-m-n)}\varepsilon ^{*\prime }\varepsilon ^{*}\right]&= \frac{1}{N(T-m-n)}E[{\varepsilon }^{\prime }\overline{{M}}_g \varepsilon ]\\&= \frac{1}{N(T-m-n)}tr[E(\overline{{M}}_g )E(\varepsilon {\varepsilon }^{\prime })]\\&= \frac{\sigma _\varepsilon ^2 }{N(T-m-n)}N\{T-tr[G({G}^{\prime }G)^{-}{G}^{\prime }]\}\\&= \sigma _\varepsilon ^2 ,\\ E\left[ \frac{1}{N(T-m-n)}\overline{{\varepsilon }}^{*\prime }\overline{{\varepsilon }}^{*}\right]&= \frac{1}{N(T-m-n)}E\{{\varepsilon }^{\prime }[({W}^{\prime }W)\otimes M_g ]\varepsilon \}\\&= \frac{1}{N(T-m-n)}tr[E(({W}^{\prime }W)\otimes M_g )E(\varepsilon {\varepsilon }^{\prime })]\\&= \frac{\sigma _\varepsilon ^2 }{N(T-m-n)}tr({W}^{\prime }W)E[tr(M_g )]\\&= \frac{\sigma _\varepsilon ^2 }{N}tr(W^{\prime }W),\\ E\left[ \frac{1}{N(T-m-n)}\varepsilon ^{*\prime }\overline{{\varepsilon }}^{*}\right]&= \frac{1}{N(T-m-n)}E[{\varepsilon }^{\prime }(W\otimes M_g )\varepsilon ]\\&= \frac{1}{N(T-m-n)}tr[E(W\otimes M_g )E(\varepsilon {\varepsilon }^{\prime })]\\&= \frac{\sigma _\varepsilon ^2 }{N(T-m-n)}tr(W)E[tr(M_g )]\\&= 0. \end{aligned}$$

The moment equations given in (27) follow immediately from the above derivation.
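The three moment conditions can also be checked by simulation. The sketch below uses entirely illustrative settings (dimensions, a random \(G\) spanning the factor space with \(M_g =I_T -G({G}^{\prime }G)^{-1}{G}^{\prime }\), and a row-normalized zero-diagonal \(W\)); the averaged quadratic forms should approach \(\sigma _\varepsilon ^2 \), \(\sigma _\varepsilon ^2 tr({W}^{\prime }W)/N\), and 0, respectively.

```python
import numpy as np

# Monte Carlo sketch of the three moment conditions (all settings assumed).
rng = np.random.default_rng(4)
N, T, mn, R = 8, 12, 3, 3000            # mn stands in for m + n
sigma = 0.7

G = rng.standard_normal((T, mn))        # arbitrary factor/deterministic terms
M_g = np.eye(T) - G @ np.linalg.solve(G.T @ G, G.T)

W = rng.random((N, N))                  # row-normalized, zero diagonal
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

denom = N * (T - mn)
m1 = m2 = m3 = 0.0
for _ in range(R):
    Eps = sigma * rng.standard_normal((N, T))   # row i holds eps_i'
    K = Eps @ M_g @ Eps.T                       # K[i, j] = eps_i' M_g eps_j
    m1 += np.trace(K) / denom                   # eps'(I_N (x) M_g)eps
    m2 += np.sum((W.T @ W) * K) / denom         # eps'((W'W) (x) M_g)eps
    m3 += np.sum(W * K) / denom                 # eps'(W (x) M_g)eps
m1, m2, m3 = m1 / R, m2 / R, m3 / R
```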

1.3 Proof of Theorem 2

1.3.1 Preliminary Statements and Proofs

We now provide Lemmata 5 and 6 and their proofs, which are needed for the proof of Theorem 2.

Lemma 5

Let \(Q^{*}\) and \(q^{*}\) be identical to \(\Phi \) and \(\phi \) in (29), except that the expectation operator is dropped. Suppose Assumptions 2 and 4 hold; then

$$\begin{aligned} Q^{*}-\Phi \mathop {\longrightarrow }^{p}0, \, q^{*}-\phi \mathop {\longrightarrow }^{p}0,\hbox { as } N(T-m-n)\longrightarrow \infty \end{aligned}$$

Proof

Note from (14) and (26) that \(e^{*}=(P\otimes I_T )\varepsilon ^{*},\,\overline{{e}}^{*}=(WP\otimes I_T )\varepsilon ^{*}\) and \(\overline{{\overline{{e}}}}^{*}=(W^{2}P\otimes I_T )\varepsilon ^{*}\).

Recalling from (29), it is not difficult to verify that the respective quadratic forms in \(e^{*}\), \(\overline{{e}}^{*}\) and \(\overline{{\overline{{e}}}}^{*}\) involved in \(Q^{*}\) and \(q^{*}\) are, apart from constants, expressible as

$$\begin{aligned} \psi _1&= \frac{1}{N(T-m-n)}e^{*\prime }e^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_1 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}\varepsilon ^{\prime }\Big [({P}^{\prime }Q_1 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_1 \otimes M_g ]\varepsilon , \quad Q_1 =I_N , \quad C_1 ={P}^{\prime }Q_1 P,\\ \psi _2&= \frac{1}{N(T-m-n)}e^{*\prime }\overline{{e}}^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_2 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }\Big [({P}^{\prime }Q_2 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_2 \otimes M_g ]\varepsilon , \,Q_2 =W, \,C_2 ={P}^{\prime }Q_2 P,\\ \psi _3&= \frac{1}{N(T-m-n)}\overline{{e}}^{*\prime }\overline{{e}}^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_3 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }\Big [({P}^{\prime }Q_3 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_3 \otimes M_g ]\varepsilon , \,Q_3 ={W}^{\prime }W, \,C_3 ={P}^{\prime }Q_3 P,\\ \psi _4&= \frac{1}{N(T-m-n)}\overline{{\overline{{e}}}}^{*\prime }\overline{{e}}^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_4 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }\Big [({P}^{\prime }Q_4 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_4 \otimes M_g ]\varepsilon ,\, Q_4 =({W}^{\prime })^{2}W,\, C_4 ={P}^{\prime }Q_4 P,\\ \psi _5&= \frac{1}{N(T-m-n)}\overline{{\overline{{e}}}}^{*\prime }\overline{{\overline{{e}}}}^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_5 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }\Big [({P}^{\prime }Q_5 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_5 \otimes M_g ]\varepsilon ,\, Q_5 =({W}^{\prime })^{2}W^{2},\, C_5 ={P}^{\prime }Q_5 P,\\ \psi _6&= \frac{1}{N(T-m-n)}e^{*\prime }\overline{{\overline{{e}}}}^{*}=\frac{1}{N(T-m-n)}e^{*\prime }(Q_6 \otimes I_T )e^{*}\\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }\Big 
[({P}^{\prime }Q_6 P)\otimes M_g \Big ]\varepsilon \\&= \frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_6 \otimes M_g ]\varepsilon ,\, Q_6 =W^{2},\, C_6 ={P}^{\prime }Q_6 P. \end{aligned}$$

The above Equations can be summarized as

$$\begin{aligned} \psi _i =\frac{1}{N(T-m-n)}{\varepsilon }^{\prime }[C_i \otimes M_g ]\varepsilon ,\quad \mathrm { i }= 1,{\ldots },6. \end{aligned}$$
(73)

In light of Assumptions 2 and 4 and Lemma 3, the row and column sums of W and P, and those of the matrices \(C_i \) \((i = 1,{\ldots },6)\), are bounded uniformly in absolute value. Thus, we obtain the following:

$$\begin{aligned} E(\psi _i )&= \frac{1}{N(T-m-n)}E\Big [{\varepsilon }^{\prime }(C_i \otimes M_g )\varepsilon \Big ]\nonumber \\&= \frac{1}{N(T-m-n)}tr\left\{ E\Big [(C_i \otimes M_g )\varepsilon {\varepsilon }^{\prime }\Big ]\right\} \nonumber \\&= \frac{\sigma _\varepsilon ^2 }{N(T-m-n)}E\Big [tr(M_g )tr(C_i )\Big ]\nonumber \\&= \frac{\sigma _\varepsilon ^2 }{N}tr(C_i )=O(1) . \end{aligned}$$
(74)

Define \(\psi _i =\frac{1}{N(T-m-n)}[{\varepsilon }^{\prime }R_i \varepsilon ]\), where \(R_i =C_i \otimes M_g \). By using the expression for the variance of quadratic forms given in Kelejian and Prucha (2001), we obtain

$$\begin{aligned}&\hbox {var}(\psi _i )\\&\quad =\frac{1}{2N^{2}(T-m-n)^{2}}E\!\left\{ tr\Big [(R_i +{R}_i^{\prime } )\Omega _\varepsilon \Big ]^{2}+\sum \limits _{j=1}^{NT} {(R_{i,jj} } )^{2}[E(\varepsilon _j^4 )-3\hbox {var}^{2}(\varepsilon _j )]\right\} \\&\quad =\frac{\sigma _\varepsilon ^4 }{2N^{2}(T-m-n)^{2}}E\left\{ tr\Big [(R_i +{R}_i^{\prime } )\Big ]^{2}+\sum \limits _{j=1}^{NT} {(R_{i,jj} } )^{2}\Big [E(\varepsilon _j^4 )-3\hbox {var}^{2}(\varepsilon _j )\Big ]\right\} \end{aligned}$$

In light of Assumption 2, \(E(\varepsilon _j^4 )-3var^{2}(\varepsilon _j )=O(1)\) and \(R_{i,jj} =O_p (1)\). Furthermore, we also note that \(\frac{tr[(R_i )^{2}]}{N(T-m-n)}=\frac{tr(M_g )tr({C}_i^{\prime } C_i )}{N(T-m-n)}=\frac{tr({C}_i^{\prime } C_i )}{N}=O(1)\). Therefore,

$$\begin{aligned} \hbox {var}(\psi _i )=o(1) . \end{aligned}$$
(75)

Combining (74) and (75) shows that

$$\begin{aligned} \psi _i -E(\psi _i )\mathop {\longrightarrow }^{p}0 , \end{aligned}$$
(76)

which establishes Lemma 5.
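The quadratic-form variance formula used above can be illustrated by simulation. The sketch below takes iid normal disturbances (so the excess-kurtosis term \(E(\varepsilon _j^4 )-3\hbox {var}^{2}(\varepsilon _j )\) vanishes) and an arbitrary fixed matrix standing in for \(R_i\); both choices are illustrative assumptions.

```python
import numpy as np

# Simulation sketch (assumed setting: iid normal eps) of
#   var(eps' R eps) = (sigma^4 / 2) * tr[(R + R')^2].
rng = np.random.default_rng(5)
n, reps = 15, 20000
sigma = 1.0

Rm = rng.random((n, n)) / n             # an arbitrary fixed matrix R
theory = 0.5 * sigma**4 * np.trace((Rm + Rm.T) @ (Rm + Rm.T))

eps = sigma * rng.standard_normal((reps, n))
q = np.einsum('ri,ij,rj->r', eps, Rm, eps)   # eps' R eps for each replication
mc_var = q.var()
```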

Lemma 6

Let \(Q^{*}\) and \(q^{*}\) be defined in Lemma 5. Given that Assumptions 1–4 and 5-b hold and that \(\hat{{\beta }}_{ccep} \) is a consistent estimator of \(\beta \),

$$\begin{aligned} Q-Q^{*}\mathop {\longrightarrow }^{p}0, \, q-q^{*}\mathop {\longrightarrow }^{p}0,\hbox { as }(N,T)\longrightarrow \infty . \end{aligned}$$

Proof

The quadratic forms composing the elements of \(Q^{*}\) and \(q^{*}\) have been collected in (73) and can be written in the form (\(\hbox {i }= 1,{\ldots },6\)):

$$\begin{aligned} \psi _i =\frac{1}{N(T-m-n)}e^{*\prime }(Q_i \otimes I_T )e^{*}=\frac{1}{N(T-m-n)}{e}^{\prime }(Q_i \otimes M_g )e , \end{aligned}$$
(77)


The quadratic forms composing the elements of \(Q\) and \(q\) defined in (31) are given by

$$\begin{aligned} \frac{1}{N(T-n-k-1)}\tilde{e}^{*\prime }(Q_i \otimes I_T )\tilde{e}^{*},\quad \hbox { for i}=1,{\ldots },6. \end{aligned}$$

Define

$$\begin{aligned} \tilde{\psi }_i =\frac{1}{N(T-m-n)}\tilde{e}^{*\prime }(Q_i \otimes I_T )\tilde{e}^{*} ,\quad \hbox { for i}=1,2{\ldots },6. \end{aligned}$$
(78)

Note that \(\tilde{\psi }_i =\frac{(T-n-k-1)}{(T-m-n)}\frac{1}{N(T-n-k-1)}\tilde{e}^{*\prime }(Q_i \otimes I_T )\tilde{e}^{*}\).

Both the number of common factors (\(m+n\)) and the number of their proxies (\(k+1\)) are finite. Thus, \(\frac{T-m-n}{T-n-k-1}\longrightarrow 1\) as \(T\longrightarrow \infty \).

Therefore, showing that \(\tilde{\psi }_i -\psi _i \mathop {\longrightarrow }^{p}0\) is sufficient to prove Lemma 6.

Clearly,
$$\begin{aligned} \tilde{e}^{*}&= \overline{{M}}_h y-\overline{{M}}_h X\hat{{\beta }}_{ccep} =\overline{{M}}_h X\beta -\overline{{M}}_h X\hat{{\beta }}_{ccep} +\overline{{M}}_h (I_N \otimes F)\gamma +\overline{{M}}_h e\nonumber \\&= \overline{{M}}_h X(\beta -\hat{{\beta }}_{ccep} )+(I_N \otimes M_h F)\gamma +\overline{{M}}_h e. \end{aligned}$$
(79)

Consider the first term of (79). Given Assumption 5-b, we obtain \(\overline{{M}}_h X=O_p (1)\). In light of \(\hat{{\beta }}_{ccep} -\beta \mathop {\longrightarrow }^{p}0\),

$$\begin{aligned} \overline{{M}}_h X(\beta -\hat{{\beta }}_{ccep} )=o_p (1) . \end{aligned}$$
(80)

Consider the second term in (79), which can be written as

$$\begin{aligned} (I_N \otimes M_h F)\gamma =\Big [(M_h F\gamma _1 )^{\prime },\ldots ,(M_h F\gamma _N )^{\prime }{\Big ]}^{\prime } . \end{aligned}$$
(81)

Note from (48) that \(M_h F=o_p (1)\), thus

$$\begin{aligned} (I_N \otimes M_h F)\gamma =o_p (1) . \end{aligned}$$
(82)

Substituting (80) and (82) into (79) shows that

$$\begin{aligned} \tilde{e}^{*}=\overline{{M}}_h e+o_p (1) . \end{aligned}$$
(83)

Substituting (83) into (78) yields

$$\begin{aligned} \tilde{\psi }_i =\frac{1}{N(T-m-n)}\Big [{e}^{\prime }\overline{{M}}_h +o_p (1)\Big ]\Big [Q_i \otimes I_T \Big ]\Big [\overline{{M}}_h e+o_p (1)\Big ]. \end{aligned}$$

Note that \(\overline{{M}}_h e=O_p (1)\), and in light of Lemmata 2 and 3, the row and column sums of matrix \(Q_i \otimes I_T \) are bounded in absolute value. Therefore,

$$\begin{aligned} \tilde{\psi }_i =\frac{1}{N(T-m-n)}\big (\overline{{M}}_h e{\big )}^{\prime }\big (Q_i \otimes I_T \big )\big (\overline{{M}}_h e\big )+o_p (1). \end{aligned}$$

Given (77) and using partitioned-matrix multiplication, we obtain the following:

$$\begin{aligned} \tilde{\psi }_i -\psi _i&= \frac{1}{N(T-m-n)}\Big [{e}^{\prime }(Q_i \otimes M_h )e-{e}^{\prime }(Q_i \otimes M_g )e\Big ]+o_p (1)\nonumber \\&= \frac{1}{N(T-m-n)}\Big [\sum \limits _{k=1}^N {\sum \limits _{j=1}^N {(Q_i )_{kj} ({e}_k^{\prime } } } M_h e_j -{e}_k^{\prime } M_g e_j )\Big ]+o_p (1)\quad \end{aligned}$$
(84)

where \(e_k =(e_{k1} ,\ldots ,e_{kt} ,\ldots ,e_{kT} )^{\prime }\) and \((Q_i)_{kj} \) is the \((k,j)\)-th element of the matrix \(Q_i \).

Substituting (47) into (84) and observing that the row and column sums of \(Q_i \) are bounded in absolute value, we establish \(\tilde{\psi }_i -\psi _i \mathop {\longrightarrow }^{p}0\), which completes the proof of Lemma 6. \(\square \)

Lemma 5 shows that \(Q^{*}-\Phi \mathop {\longrightarrow }^{p}0\) and \(q^{*}-\phi \mathop {\longrightarrow }^{p}0\), while Lemma 6 shows that \(Q-Q^{*}\mathop {\longrightarrow }^{p}0\) and \(q-q^{*}\mathop {\longrightarrow }^{p}0\). Therefore,

$$\begin{aligned} Q-\Phi \mathop {\longrightarrow }^{p}0,\quad q-\phi \mathop {\longrightarrow }^{p}0,\quad \hbox { as }(N,T)\mathop {\longrightarrow }^{j}\infty . \end{aligned}$$
(85)

1.3.2 Proof of Uniqueness and Consistency for Theorem 2

Given Lemmata 5 and 6, we are ready for the final step in proving Theorem 2. For \(\tilde{\lambda }\) and \(\tilde{\sigma }_\varepsilon ^2 \) defined in (32), Lemma 2 in Jennrich (1969) ensures their existence and measurability. We establish consistency by showing that the conditions of Lemma 3.1 in Pötscher and Prucha (1997) are satisfied. For convenience, the objective function and its nonstochastic counterpart are given, respectively, as

$$\begin{aligned} R({\underline{\theta } })&= \Bigg \{G\Big [{\underline{\lambda } },{\underline{\lambda } }^{2},{\underline{\sigma } }_\varepsilon ^2 {\Big ]}^{\prime }-g{\Bigg \}}^{\prime }\Bigg \{G\Big [\underline{\lambda },{\underline{\lambda } }^{2},{\underline{\sigma } }_\varepsilon ^2 {\Big ]}^{\prime }-g\Bigg \},\\ \overline{{R}}(\underline{\theta })&= \Bigg \{\Phi \Big [{\underline{\lambda } },{\underline{\lambda } }^{2},{\underline{\sigma } }_\varepsilon ^2 {\Big ]}^{\prime }-\phi {\Bigg \}}^{\prime }\Bigg \{\Phi \Big [{\underline{\lambda } },{\underline{\lambda } }^{2},{\underline{\sigma } }_\varepsilon ^2 {\Big ]}^{\prime }-\phi \Bigg \}, \end{aligned}$$

where \({\underline{\theta } }=({\underline{\lambda } },{\underline{\sigma }}_\varepsilon ^2 )\).
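To make the objective \(R({\underline{\theta } })\) concrete: the GM step is nonlinear least squares in \(({\underline{\lambda } },{\underline{\sigma } }_\varepsilon ^2 )\), quartic in \({\underline{\lambda } }\) and quadratic in \({\underline{\sigma } }_\varepsilon ^2 \), so \({\underline{\sigma } }_\varepsilon ^2 \) can be profiled out in closed form for each \({\underline{\lambda } }\). The sketch below illustrates this; the matrix `G`, vector `g` and the "true" values are hypothetical stand-ins, not the paper's actual sample moments:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3))              # stand-in for the 3x3 sample moment matrix
lam0, sig0 = 0.4, 1.5                        # hypothetical true (lambda, sigma_eps^2)
g = G @ np.array([lam0, lam0**2, sig0])      # exact moment vector at the truth

def R(lam, sig2):
    """GM objective: squared norm of the moment equations."""
    r = G @ np.array([lam, lam**2, sig2]) - g
    return r @ r

def sig2_profile(lam):
    """Closed-form minimizer over sigma^2 (objective is quadratic in it)."""
    resid = g - G[:, :2] @ np.array([lam, lam**2])
    return max(0.0, (resid @ G[:, 2]) / (G[:, 2] @ G[:, 2]))

grid = np.linspace(-0.99, 0.99, 3961)        # step 0.0005; 0.4 lies on the grid
lam_hat = min(grid, key=lambda l: R(l, sig2_profile(l)))
sig2_hat = sig2_profile(lam_hat)
```

With exact moments the objective vanishes at the truth, so the grid-profile search recovers \((\lambda ,\sigma _\varepsilon ^2 )\) up to grid resolution; a real implementation would use a numerical optimizer on the noisy sample moments instead.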

The proof has two steps: first, the identifiable uniqueness of the true parameters is established; then \(\tilde{\lambda }\) and \(\tilde{\sigma }_\varepsilon ^2 \) defined by (32) are shown to be consistent estimators.

Using (29) we have \(\overline{{R}}(\theta )=0\). In light of Assumption 6, we have

$$\begin{aligned} \overline{{R}}(\underline{\theta })-\overline{{R}}(\theta )&= \Big [\lambda -{\underline{\lambda }},\lambda ^{2}-{\underline{\lambda }}^{2},\sigma _\varepsilon ^2 -{\underline{\sigma } }_\varepsilon ^2 \Big ]{\Phi }^{\prime }\Phi \Big [\lambda -{\underline{\lambda } },\lambda ^{2}-{\underline{\lambda } }^{2},\sigma _\varepsilon ^2 -{\underline{\sigma }}_\varepsilon ^2 {\Big ]}^{\prime }\\&\ge \rho _{\min } ({\Phi }^{\prime }\Phi )\Big [\lambda -{\underline{\lambda } },\lambda ^{2}-{\underline{\lambda } }^{2},\sigma _\varepsilon ^2 -{\underline{\sigma }}_\varepsilon ^2 \Big ]\Big [\lambda -{\underline{\lambda } },\lambda ^{2}-{\underline{\lambda } }^{2},\sigma _\varepsilon ^2 -{\underline{\sigma } }_\varepsilon ^2 {\Big ]}^{\prime }\\&\ge \rho _{\min } ({\Phi }^{\prime }\Phi )\left\| {{\underline{\theta } }-\theta } \right\| ^{2}>0\quad \hbox {for }{\underline{\theta } }\ne \theta . \end{aligned}$$

Therefore, for any \(\kappa >0\), \(\mathop {\inf }\limits _{\{{\underline{\theta } }:\left\| {{\underline{\theta }}-\theta } \right\| \ge \kappa \}} \overline{{R}}({\underline{\theta } })-\overline{{R}}(\theta )\ge \mathop {\inf }\limits _{\{\underline{\theta }:\left\| {\underline{\theta }-\theta } \right\| \ge \kappa \}} \rho _{\min } ({\Phi }^{\prime }\Phi )\left\| {\underline{\theta }-\theta } \right\| ^{2}>0\), which proves that the true parameters are identifiably unique. Next, we consider consistency.
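The key inequality above is the Rayleigh-quotient bound \(d^{\prime }({\Phi }^{\prime }\Phi )d\ge \rho _{\min } ({\Phi }^{\prime }\Phi )\left\| d \right\| ^{2}\), valid for any vector \(d\). A short numerical sanity check, with a random stand-in for \(\Phi \):

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = rng.standard_normal((3, 3))       # stand-in for the limit matrix Phi
A = Phi.T @ Phi                         # symmetric positive semi-definite
rho_min = np.linalg.eigvalsh(A).min()   # smallest eigenvalue of Phi'Phi

d = rng.standard_normal(3)              # plays the role of the deviation vector
assert d @ A @ d >= rho_min * (d @ d) - 1e-12
```

Assumption 6 (full column rank of \(\Phi \)) is what guarantees \(\rho _{\min } ({\Phi }^{\prime }\Phi )>0\), so the bound is strictly positive away from \(\theta \).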

In light of Lemma 3.1 in Pötscher and Prucha (1997), it suffices to verify the following condition:

$$\begin{aligned} \mathop {\sup }\limits _{\lambda \in [-a,a],\sigma _\varepsilon ^2 \in [0,b_\varepsilon ]} \left| {R(\underline{\theta })-\overline{{R}}(\underline{\theta })} \right| \mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(86)

Let \(F=[G,-g]\) and \(\Lambda =[\Phi ,-\phi ]\), then for \(\lambda \in [-a,a]\) and \(\sigma _\varepsilon ^2 \in [0,b_\varepsilon ]\),

$$\begin{aligned} \left| {R(\underline{\theta })-\overline{{R}}(\underline{\theta })} \right|&= \left| {[\underline{\lambda },\underline{\lambda }^{2},{\underline{\sigma }}_\varepsilon ^2 ,1][{F}^{\prime }F-{\Lambda }^{\prime }\Lambda ][{\underline{\lambda } },{\underline{\lambda } }^{2},{\underline{\sigma }}_\varepsilon ^2 ,1{]}^{\prime }} \right| \\&\le \left\| {{F}^{\prime }F-{\Lambda }^{\prime }\Lambda } \right\| \left\| {{\underline{\lambda } },{\underline{\lambda } }^{2},{\underline{\sigma } }_\varepsilon ^2 ,1} \right\| ^{2}\\&\le \left\| {{F}^{\prime }F-{\Lambda }^{\prime }\Lambda } \right\| [1+a^{2}+a^{4}+b_\varepsilon ^2 ]. \end{aligned}$$

Given (85), we have \(F-\Lambda \mathop {\longrightarrow }^{p}0\). The elements of \(F\) and \(\Lambda \) are both \(O_p (1)\), as seen in Lemmata 4 and 5, and consequently \(\left\| {{F}^{\prime }F-{\Lambda }^{\prime }\Lambda } \right\| \mathop {\longrightarrow }^{p}0\), which yields:

$$\begin{aligned} \left| {R({\underline{\theta } })-\overline{{R}}(\underline{\theta })} \right| \le \left\| {{F}^{\prime }F-{\Lambda }^{\prime }\Lambda } \right\| [1+a^{2}+a^{4}+b_\varepsilon ^2 ]\mathop {\longrightarrow }^{p}0 . \end{aligned}$$
(87)

Given (87), \(\sup \limits _{\lambda \in [-a,a],\sigma _\varepsilon ^2 \in [0,b_\varepsilon ]} \left| {R(\underline{\theta })-\overline{{R}}(\underline{\theta })} \right| \mathop {\longrightarrow }^{p}0\), which establishes consistency. \(\square \)

Cite this article

Qian, J.B. Estimation of Panel Model with Spatial Autoregressive Error and Common Factors. Comput Econ 47, 367–399 (2016). https://doi.org/10.1007/s10614-015-9494-7
