Abstract
This paper considers comparison problems for dispersion matrices of predictors under two competing linear models subject to the same restrictions on their joint unknown parameters. One of the competing models is a constrained linear model (CLM) and the other is a constrained over-parameterized linear model (COLM), obtained by adding new regressors to the CLM. After explicitly converting the CLM and its COLM to their implicitly constrained forms, analytical expressions and properties of the best linear unbiased predictors (BLUPs) under the models are derived using quadratic matrix optimization methods. In particular, the authors provide equalities and inequalities for the dispersion matrices of the BLUPs under the models by using formulas for ranks and inertias of block matrices. Comparison results on the dispersion matrices are also derived for special cases.
References
Alalouf IS, Styan GPH (1979) Characterizations of estimability in the general linear model. Ann Stat 7(1):194–200
Amemiya T (1985) Advanced econometrics. Basil Blackwell, Oxford
Baksalary JK (1984) A study of the equivalence between a Gauss-Markoff model and its augmentation by nuisance parameters. Ser Stat 15(1):3–35
Baksalary JK, Pordzik PR (1989) Inverse-partitioned-matrix method for the general Gauss-Markov model with linear restrictions. J Stat Plan Inference 23(2):133–143
Baksalary JK, Pordzik PR (1990) A note on comparing the unrestricted and restricted least-squares estimators. Linear Algebra Appl 127:371–378
Bhimasankaram P, Jammalamadaka SR (1994) Updates of statistics in a general linear model: A statistical interpretation and applications. Commun Stat Simul Comput 23(3):789–801
Büyükkaya ME (2022) Characterizing relationships between BLUPs under linear mixed model and some associated reduced models. Commun Stat Simul Comput 2022:1–14. https://doi.org/10.1080/03610918.2022.2115071
Chipman JS, Rao MM (1964) The treatment of linear restrictions in regression analysis. Econometrica 32(1/2):198–209
Dent WT (1980) On restricted estimation in linear models. J Econ 12(1):49–58
Dong B, Guo W, Tian Y (2014) On relations between BLUEs under two transformed linear models. J Multivar Anal 131:279–292
Drygas H (1970) The coordinate-free approach to Gauss-Markov estimation. Springer, Heidelberg
Dube M (1999) Mixed regression estimator under inclusion of some superfluous variables. TEST 8(2):411–417
Gan S, Sun Y, Tian Y (2017) Equivalence of predictors under real and over-parameterized linear models. Commun Stat Theory Methods 46(11):5368–5383
Goldberger AS (1962) Best linear unbiased prediction in the generalized linear regression model. J Am Stat Assoc 57(298):369–375
Güler N, Büyükkaya ME (2023) Inertia and rank approach in transformed linear mixed models for comparison of BLUPs. Commun Stat Theory Methods 52(9):3108–3123
Güler N, Puntanen S, Özdemir H (2014) On the BLUEs in two linear models via C. R. Rao’s Pandora’s Box. Commun Stat Theory Methods 43(5):921–931
Güler N, Büyükkaya ME, Yiğit M (2022) Comparison of covariance matrices of predictors in seemingly unrelated regression models. Indian J Pure Appl Math 53:801–809
Hallum CR, Lewis TO, Boullion TL (1973) Estimation in the restricted general linear model with a positive semidefinite covariance matrix. Commun Stat 1(2):157–166
Haslett SJ, Puntanen S (2010) Effect of adding regressors on the equality of the BLUEs under two linear models. J Stat Plan Inference 140(1):104–110
Haupt H, Oberhofer W (2002) Fully restricted linear regression: a pedagogical note. Econ Bull 3(1):1–7
Hodges JS (2014) Richly parameterized linear models: additive, time series, and spatial models using random effects. CRC Press, Boca Raton
Isotalo J, Puntanen S, Styan GPH (2007) Effect of adding regressors on the equality of the OLSE and BLUE. Int J Stat Sci 6:193–201
Jammalamadaka SR, Sengupta D (2007) Inclusion and exclusion of data or parameters in the general linear model. Stat Probab Lett 77(12):1235–1247
Jun SJ, Pinkse J (2009) Adding regressors to obtain efficiency. Econ Theory 25(1):298–301
Kadiyala K (1986) Mixed regression estimator under misspecification. Econ Lett 21(1):27–30
Li W, Tian Y, Yuan R (2023) Statistical analysis of a linear regression model with restrictions and superfluous variables. J Ind Manag Optim 19(5):3107–3127
Löwner K (1934) Über monotone Matrixfunktionen. Math Z 38:177–216
Lu C, Gan S, Tian Y (2015) Some remarks on general linear model with new regressors. Stat Probab Lett 97:16–24
Lu C, Sun Y, Tian Y (2018) A comparison between two competing fixed parameter constrained general linear models with new regressors. Statistics 52(4):769–781
Mathew T (1983) A note on best linear unbiased estimation in the restricted general linear model. Ser Stat 14(1):3–6
Mitra SK, Moore BJ (1973) Gauss-Markov estimation with an incorrect dispersion matrix. Sankhyā Indian J Stat 35(2):139–152
Puntanen S, Styan GPH, Isotalo J (2011) Matrix tricks for linear statistical models: our personal top twenty. Springer, Heidelberg
Qian H, Tian Y (2006) Partially superfluous observations. Econ Theory 22(3):529–536
Rao CR (1973) Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. J Multivar Anal 3(3):276–292
Ravikumar B, Ray S, Savin NE (2000) Robust Wald tests in SUR systems with adding-up restrictions. Econometrica 68(3):715–719
Ren X (2015) Corrigendum to "On the equivalence of the BLUEs under a general linear model and its restricted and stochastically restricted models" [Stat Probab Lett 90 (2014) 1–10]. Stat Probab Lett 104:181–185
Tian Y (2009) On equalities for BLUEs under misspecified Gauss-Markov models. Acta Math Sin Engl Ser 25(11):1907–1920
Tian Y (2010) Equalities and inequalities for inertias of Hermitian matrices with applications. Linear Algebra Appl 433(1):263–296
Tian Y (2012) Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method. Nonlinear Anal Theory Methods Appl 75(2):717–734
Tian Y (2013) On properties of BLUEs under general linear regression models. J Stat Plan Inference 143(4):771–782
Tian Y (2017) Some equalities and inequalities for covariance matrices of estimators under linear model. Stat Pap 58(2):467–484
Tian Y, Guo W (2016) On comparison of dispersion matrices of estimators under a constrained linear model. Stat Methods Appl 25(4):623–649
Tian Y, Jiang B (2016) Matrix rank/inertia formulas for least-squares solutions with statistical applications. Spec Matrices 4(1):130–140
Tian Y, Jiang B (2017) A new analysis of the relationships between a general linear model and its mis-specified forms. J Korean Stat Soc 46(2):182–193
Tian Y, Beisiegel M, Dagenais E, Haines C (2008) On the natural restrictions in the singular Gauss-Markov model. Stat Pap 49(3):553–564
Wijekoon P, Trenkler G (1989) Mean square error matrix superiority of estimators under linear restrictions and misspecification. Econ Lett 30(2):141–149
Appendix A
The formulas for inertias and ranks of block matrices used in the proofs of the results in this paper are given in the following two lemmas; see Tian (2010).
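The inertia of a symmetric matrix is the triple counting its positive, negative, and zero eigenvalues; the lemmas below manipulate these counts for block matrices. A minimal numerical sketch of the definition (not of the lemmas themselves), with the function name `inertia` chosen here for illustration:

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Return (i+, i-, i0): the numbers of positive, negative,
    and zero eigenvalues of the symmetric matrix A."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

A = np.diag([2.0, -3.0, 0.0])
print(inertia(A))  # (1, 1, 1): one positive, one negative, one zero eigenvalue
```

The rank is recovered as the sum of the positive and negative inertias, which is how the rank identity (A4) below combines the two inertia expressions.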
Lemma A.1
Let \({\textbf{C}}_{1}\), \({\textbf{C}}_{2}\) \(\in {\mathbb {R}}^{k\times n}\), or, let \({\textbf{C}}_{1}={\textbf{C}}_{1}'\), \({\textbf{C}}_{2}={\textbf{C}}_{2}'\) \(\in {\mathbb {R}}^{k\times k}\). Then,
Lemma A.2
Let \({\textbf{C}}={\textbf{C}}' \in {\mathbb {R}}^{k\times k}\), \({\textbf{D}}={\textbf{D}}' \in {\mathbb {R}}^{n\times n}\), \({\textbf{P}}\in {\mathbb {R}}^{k\times n}\), and \(t \in {\mathbb {R}}\). Then,
The constrained quadratic matrix-valued function optimization problem related to minimization problems on dispersion matrices of predictors is given in the following lemma; see Tian (2012).
Lemma A.3
Let \({\textbf{K}}\in {\mathbb {R}}^{n\times p}\) and \({\textbf{D}}\in {\mathbb {R}}^{m\times p}\) be given matrices, and let \({\textbf{Q}}\in {\mathbb {R}}^{n\times n}\) be a symmetric positive semi-definite matrix. Assume that there exists \({\textbf{X}}_{0} \in {\mathbb {R}}^{m\times n}\) such that \({\textbf{X}}_{0}{\textbf{K}}={\textbf{D}}\). Then the maximal positive inertia of \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'-{\textbf{X}}{\textbf{Q}}{\textbf{X}}'\) subject to all solutions of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) is
Hence a solution \({\textbf{X}}_{0}\) of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) satisfies \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'\preccurlyeq {\textbf{X}}{\textbf{Q}}{\textbf{X}}'\) for all solutions of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) \(\Leftrightarrow \) \({\textbf{X}}_{0}\) satisfies both \({\textbf{X}}_{0}{\textbf{K}}={\textbf{D}}\) and \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{K}}^{\perp }={\textbf{0}}\).
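The characterization in Lemma A.3 can be checked numerically: solving the combined system \({\textbf{X}}[{\textbf{K}},\,{\textbf{Q}}{\textbf{K}}^{\perp }]=[{\textbf{D}},\,{\textbf{0}}]\) yields the Löwner-minimal solution, and every other solution of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) gives a larger \({\textbf{X}}{\textbf{Q}}{\textbf{X}}'\). A sketch under simplifying assumptions (random consistent data, positive definite \({\textbf{Q}}\), and \({\textbf{K}}^{\perp }\) taken as the orthogonal projector \({\textbf{I}}-{\textbf{K}}{\textbf{K}}^{+}\)):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 2, 5, 3

# Random data: K (n x p), a positive definite Q, and a consistent D = X_true K.
K = rng.standard_normal((n, p))
L = rng.standard_normal((n, n))
Q = L @ L.T
D = rng.standard_normal((m, n)) @ K

P = np.eye(n) - K @ np.linalg.pinv(K)   # K-perp: projector onto the orthogonal complement of C(K)

# The Loewner-minimal solution solves X [K, Q P] = [D, 0] (Lemma A.3).
M = np.hstack([K, Q @ P])
X0 = np.hstack([D, np.zeros((m, n))]) @ np.linalg.pinv(M)

assert np.allclose(X0 @ K, D)           # X0 solves XK = D
assert np.allclose(X0 @ Q @ P, 0)       # and X0 Q K-perp = 0

# Any other solution X = X0 + V P of XK = D satisfies X0 Q X0' <= X Q X' (Loewner).
for _ in range(5):
    V = rng.standard_normal((m, n))
    X = X0 + V @ P
    diff = X @ Q @ X.T - X0 @ Q @ X0.T
    assert np.linalg.eigvalsh(diff).min() >= -1e-9
```

The check succeeds because the cross terms in \({\textbf{X}}{\textbf{Q}}{\textbf{X}}'-{\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'\) vanish when \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{K}}^{\perp }={\textbf{0}}\), leaving the positive semidefinite term \({\textbf{V}}{\textbf{K}}^{\perp }{\textbf{Q}}{\textbf{K}}^{\perp }{\textbf{V}}'\).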
Proof of Lemma 1
Let \({\textbf{T}}{{\widehat{{\textbf{y}}}}}\) be an unbiased linear predictor for \({\textbf{n}}\) in \({{\widehat{{\mathcal {N}}}}}\). Then,
Expressions similar to those in (A11) and (A12) can be written for the other unbiased predictor \({\textbf{S}}{{\widehat{{\textbf{y}}}}}\) by substituting \({\textbf{S}}\) for \({\textbf{T}}\). Finding a solution \({\textbf{T}}\) of the consistent equation \({\textbf{T}}\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}}\end{bmatrix}={\widehat{{\textbf{N}}}}\) such that
corresponds to finding the \({{\,\textrm{BLUP}\,}}\) of \({\textbf{n}}\) under \({{\widehat{{\mathcal {N}}}}}\). (A13) is a standard constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering, as given in Lemma A.3. Applying this lemma to (A13) yields (9), and Lemma A.3 also gives the fundamental \({{\,\textrm{BLUP}\,}}\) equation for \({\textbf{n}}\) in (10). It is well known that the general solution of (10) can be written as in (11). Item (1) follows from (11). The expressions in item (2) are well-known results; see also (Tian 2013, Lemma 2.1(a)). The equalities in item (3) are seen from (7) and (11). \(\square \)
Proof of Theorem 1
Let \({\textbf{G}}=\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}} \end{bmatrix}\) and \({\textbf{H}}={{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]\). Using (13) and (A10),
is obtained, where \({\textbf{J}}_{n}=\begin{bmatrix}{\textbf{G}},&{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}\). It is seen from (12) and Definition 1 that \({\mathscr {C}}({{\widehat{\varvec{\Gamma }}}})\subseteq {\mathscr {C}}({\textbf{J}}_{n})\) and \({\mathscr {C}}\left( \begin{bmatrix}{\widehat{{\textbf{N}}}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}'\right) \subseteq {\mathscr {C}}({\textbf{J}}_{n}')\) hold. Then, applying (A10) to (A14), using (A5), (A7), and (12) together with elementary block matrix operations, and substituting \({\textbf{J}}_{n}\) into (A14), (A14) is equivalently written as
We can reapply (A10) to (A15) after substituting \({\textbf{H}}={{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]\) from (14). Then, in a manner similar to the derivation of (A14), (A15) is equivalently written as
where \({\textbf{J}}_{m}=\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix}\). We can apply (A10) to (A16) since \({\mathscr {C}}({{\widehat{\varvec{\Gamma }}}})\subseteq {\mathscr {C}}({\textbf{J}}_{m})\) and \({\mathscr {C}}(\begin{bmatrix}{\textbf{N}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix}')\subseteq {\mathscr {C}}({\textbf{J}}_{m}')\) hold. Then, substituting \({\textbf{J}}_{m}\), (A16) is equivalently written as
Using (A5)–(A8) and some congruence operations, (A17) is equivalently written as
where (A19) is obtained by substituting \({\textbf{G}}\) into (A18). Consequently, using (A9) and writing \({\textbf{S}}\) for the first matrix in (A19), we obtain the positive and negative inertias of the difference \({{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]-{{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {N}}}}}}({\textbf{n}})}]\) as follows:
respectively. The rank of \({{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]-{{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {N}}}}}}({\textbf{n}})}]\) is written as
from (A4), by adding the expressions in (A20). Applying (A1)–(A3) to (A20) and (A21) yields items (1)–(5). \(\square \)
Cite this article
Güler, N., Büyükkaya, M.E. Further remarks on constrained over-parameterized linear models. Stat Papers 65, 975–988 (2024). https://doi.org/10.1007/s00362-023-01426-z