
Further remarks on constrained over-parameterized linear models

Statistical Papers

Abstract

This paper considers comparison problems for dispersion matrices of predictors under two competing linear models that share the same restrictions on their joint unknown parameters. One of the competing models is a constrained linear model (CLM) and the other is a constrained over-parameterized linear model (COLM), obtained by adding new regressors to the CLM. After explicitly converting the CLM and its COLM to their implicitly constrained forms, analytical expressions and properties of the best linear unbiased predictors (BLUPs) under the models are derived by means of quadratic matrix optimization methods. In particular, the authors establish equalities and inequalities for the dispersion matrices of the BLUPs under the models by using formulas for ranks and inertias of block matrices. Comparison results on the dispersion matrices are also derived for special cases.


References

  • Alalouf IS, Styan GPH (1979) Characterizations of estimability in the general linear model. Ann Stat 7(1):194–200

  • Amemiya T (1985) Advanced econometrics. Basil Blackwell, Oxford

  • Baksalary JK (1984) A study of the equivalence between a Gauss-Markoff model and its augmentation by nuisance parameters. Ser Stat 15(1):3–35

  • Baksalary JK, Pordzik PR (1989) Inverse-partitioned-matrix method for the general Gauss-Markov model with linear restrictions. J Stat Plan Inference 23(2):133–143

  • Baksalary JK, Pordzik PR (1990) A note on comparing the unrestricted and restricted least-squares estimators. Linear Algebra Appl 127:371–378

  • Bhimasankaram P, Jammalamadaka SR (1994) Updates of statistics in a general linear model: a statistical interpretation and applications. Commun Stat Simul Comput 23(3):789–801

  • Büyükkaya ME (2022) Characterizing relationships between BLUPs under linear mixed model and some associated reduced models. Commun Stat Simul Comput 2022:1–14. https://doi.org/10.1080/03610918.2022.2115071

  • Chipman JS, Rao MM (1964) The treatment of linear restrictions in regression analysis. Econometrica 32(1/2):198–209

  • Dent WT (1980) On restricted estimation in linear models. J Econ 12(1):49–58

  • Dong B, Guo W, Tian Y (2014) On relations between BLUEs under two transformed linear models. J Multivar Anal 131:279–292

  • Drygas H (1970) The coordinate-free approach to Gauss-Markov estimation. Springer, Heidelberg

  • Dube M (1999) Mixed regression estimator under inclusion of some superfluous variables. TEST 8(2):411–417

  • Gan S, Sun Y, Tian Y (2017) Equivalence of predictors under real and over-parameterized linear models. Commun Stat Theory Methods 46(11):5368–5383

  • Goldberger AS (1962) Best linear unbiased prediction in the generalized linear regression model. J Am Stat Assoc 57(298):369–375

  • Güler N, Büyükkaya ME (2023) Inertia and rank approach in transformed linear mixed models for comparison of BLUPs. Commun Stat Theory Methods 52(9):3108–3123

  • Güler N, Puntanen S, Özdemir H (2014) On the BLUEs in two linear models via C. R. Rao’s Pandora’s Box. Commun Stat Theory Methods 43(5):921–931

  • Güler N, Büyükkaya ME, Yiğit M (2022) Comparison of covariance matrices of predictors in seemingly unrelated regression models. Indian J Pure Appl Math 53:801–809

  • Hallum CR, Lewis TO, Boullion TL (1973) Estimation in the restricted general linear model with a positive semidefinite covariance matrix. Commun Stat 1(2):157–166

  • Haslett SJ, Puntanen S (2010) Effect of adding regressors on the equality of the BLUEs under two linear models. J Stat Plan Inference 140(1):104–110

  • Haupt H, Oberhofer W (2002) Fully restricted linear regression: a pedagogical note. Econ Bull 3(1):1–7

  • Hodges JS (2014) Richly parameterized linear models: additive, time series, and spatial models using random effects. CRC Press, Boca Raton

  • Isotalo J, Puntanen S, Styan GPH (2007) Effect of adding regressors on the equality of the OLSE and BLUE. Int J Stat Sci 6:193–201

  • Jammalamadaka SR, Sengupta D (2007) Inclusion and exclusion of data or parameters in the general linear model. Stat Probab Lett 77(12):1235–1247

  • Jun SJ, Pinkse J (2009) Adding regressors to obtain efficiency. Econ Theory 25(1):298–301

  • Kadiyala K (1986) Mixed regression estimator under misspecification. Econ Lett 21(1):27–30

  • Li W, Tian Y, Yuan R (2023) Statistical analysis of a linear regression model with restrictions and superfluous variables. J Ind Manag Optim 19(5):3107–3127

  • Löwner K (1934) Über monotone Matrixfunktionen. Math Z 38:177–216

  • Lu C, Gan S, Tian Y (2015) Some remarks on general linear model with new regressors. Stat Probab Lett 97:16–24

  • Lu C, Sun Y, Tian Y (2018) A comparison between two competing fixed parameter constrained general linear models with new regressors. Statistics 52(4):769–781

  • Mathew T (1983) A note on best linear unbiased estimation in the restricted general linear model. Ser Stat 14(1):3–6

  • Mitra SK, Moore BJ (1973) Gauss-Markov estimation with an incorrect dispersion matrix. Sankhyā Indian J Stat 35(2):139–152

  • Puntanen S, Styan GPH, Isotalo J (2011) Matrix tricks for linear statistical models: our personal top twenty. Springer, Heidelberg

  • Qian H, Tian Y (2006) Partially superfluous observations. Econ Theory 22(3):529–536

  • Rao CR (1973) Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. J Multivar Anal 3(3):276–292

  • Ravikumar B, Ray S, Savin NE (2000) Robust Wald tests in SUR systems with adding-up restrictions. Econometrica 68(3):715–719

  • Ren X (2015) Corrigendum to "On the equivalence of the BLUEs under a general linear model and its restricted and stochastically restricted models" [Stat Probab Lett 90 (2014) 1–10]. Stat Probab Lett 104:181–185

  • Tian Y (2009) On equalities for BLUEs under misspecified Gauss-Markov models. Acta Math Sin Engl Ser 25(11):1907–1920

  • Tian Y (2010) Equalities and inequalities for inertias of Hermitian matrices with applications. Linear Algebra Appl 433(1):263–296

  • Tian Y (2012) Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method. Nonlinear Anal Theory Methods Appl 75(2):717–734

  • Tian Y (2013) On properties of BLUEs under general linear regression models. J Stat Plan Inference 143(4):771–782

  • Tian Y (2017) Some equalities and inequalities for covariance matrices of estimators under linear model. Stat Pap 58(2):467–484

  • Tian Y, Guo W (2016) On comparison of dispersion matrices of estimators under a constrained linear model. Stat Methods Appl 25(4):623–649

  • Tian Y, Jiang B (2016) Matrix rank/inertia formulas for least-squares solutions with statistical applications. Spec Matrices 4(1):130–140

  • Tian Y, Jiang B (2017) A new analysis of the relationships between a general linear model and its mis-specified forms. J Korean Stat Soc 46(2):182–193

  • Tian Y, Beisiegel M, Dagenais E, Haines C (2008) On the natural restrictions in the singular Gauss-Markov model. Stat Pap 49(3):553–564

  • Wijekoon P, Trenkler G (1989) Mean square error matrix superiority of estimators under linear restrictions and misspecification. Econ Lett 30(2):141–149


Author information

Correspondence to Nesrin Güler.


Appendix A

Formulas for the inertias and ranks of block matrices, used in the proofs of the results in this paper, are given in the following two lemmas; see Tian (2010).

Lemma A.1

Let \({\textbf{C}}_{1}\), \({\textbf{C}}_{2}\) \(\in {\mathbb {R}}^{k\times n}\), or, let \({\textbf{C}}_{1}={\textbf{C}}_{1}'\), \({\textbf{C}}_{2}={\textbf{C}}_{2}'\) \(\in {\mathbb {R}}^{k\times k}\). Then,

$$\begin{aligned}{} & {} {\varvec{r}}({\textbf{C}}_{1}-{\textbf{C}}_{2})=0 \Leftrightarrow {\textbf{C}}_{1}={\textbf{C}}_{2}. \end{aligned}$$
(A1)
$$\begin{aligned}{} & {} {\varvec{i}}_{-}({\textbf{C}}_{1}-{\textbf{C}}_{2})=k \Leftrightarrow {\textbf{C}}_{1} \prec {\textbf{C}}_{2} \hbox { and } {\varvec{i}}_{+}({\textbf{C}}_{1}-{\textbf{C}}_{2})=k \Leftrightarrow {\textbf{C}}_{1} \succ {\textbf{C}}_{2}. \end{aligned}$$
(A2)
$$\begin{aligned}{} & {} {\varvec{i}}_{-}({\textbf{C}}_{1}-{\textbf{C}}_{2})=0 \Leftrightarrow {\textbf{C}}_{1}\succcurlyeq {\textbf{C}}_{2} \hbox { and } {\varvec{i}}_{+}({\textbf{C}}_{1}-{\textbf{C}}_{2})=0 \Leftrightarrow {\textbf{C}}_{1}\preccurlyeq {\textbf{C}}_{2}. \end{aligned}$$
(A3)
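The inertia criteria in (A1)–(A3) translate directly into eigenvalue counts, so they can be checked numerically. The following sketch (an illustration only, not part of the paper; it assumes NumPy is available, and the `inertia` helper is a hypothetical name) computes the inertia of a symmetric matrix by counting positive and negative eigenvalues and verifies the criteria in (A2) and (A3) on a simple pair of matrices.

```python
import numpy as np

def inertia(C, tol=1e-9):
    """Count positive and negative eigenvalues of a symmetric matrix,
    i.e. the pair (i_+(C), i_-(C))."""
    w = np.linalg.eigvalsh((C + C.T) / 2)  # symmetrize against round-off
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

k = 4
C1 = np.zeros((k, k))
C2 = np.eye(k)

# (A2): C1 - C2 = -I has i_- = k, certifying C1 < C2 strictly.
print(inertia(C1 - C2))  # (0, 4)

# (A3): C2 - C1 = I has i_- = 0, certifying C2 >= C1.
print(inertia(C2 - C1))  # (4, 0)
```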

Lemma A.2

Let \({\textbf{C}}={\textbf{C}}' \in {\mathbb {R}}^{k\times k}\), \({\textbf{D}}={\textbf{D}}' \in {\mathbb {R}}^{n\times n}\), \({\textbf{P}}\in {\mathbb {R}}^{k\times n}\), and \(t \in {\mathbb {R}}\). Then,

$$\begin{aligned}{} & {} {\varvec{r}}({\textbf{C}})={\varvec{i}}_{+}({\textbf{C}})+{\varvec{i}}_{-}({\textbf{C}}). \end{aligned}$$
(A4)
$$\begin{aligned}{} & {} {\varvec{i}}_{\pm }(t{\textbf{C}})={\varvec{i}}_{\pm }({\textbf{C}}) \hbox { if } t>0 \; \hbox { and } \; {\varvec{i}}_{\pm }(t{\textbf{C}})={\varvec{i}}_{\mp }({\textbf{C}}) \hbox { if } t<0. \end{aligned}$$
(A5)
$$\begin{aligned}{} & {} {\varvec{i}}_{\pm }\begin{bmatrix}{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{D}}\end{bmatrix}={\varvec{i}}_{\pm }\begin{bmatrix}{\textbf{C}}&{} -{\textbf{P}}\\ -{\textbf{P}}' &{} {\textbf{D}}\end{bmatrix}={\varvec{i}}_{\mp }\begin{bmatrix}-{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} -{\textbf{D}}\end{bmatrix}. \end{aligned}$$
(A6)
$$\begin{aligned}{} & {} {\varvec{i}}_{\pm }\begin{bmatrix}{\textbf{C}}&{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{D}}\end{bmatrix}={\varvec{i}}_{\pm }({\textbf{C}})+{\varvec{i}}_{\pm }({\textbf{D}}) \; \hbox { and } \; {\varvec{i}}_{+}\begin{bmatrix}{\textbf{0}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{0}}\end{bmatrix}={\varvec{i}}_{-}\begin{bmatrix}{\textbf{0}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{0}}\end{bmatrix}={\varvec{r}}({\textbf{P}}). \end{aligned}$$
(A7)
$$\begin{aligned}{} & {} {\varvec{i}}_{\pm }\begin{bmatrix}{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{0}}\end{bmatrix}={\varvec{r}}({\textbf{P}})+{\varvec{i}}_{\pm }({\textbf{E}}_{{\textbf{P}}}{\textbf{C}}{\textbf{E}}_{{\textbf{P}}}). \end{aligned}$$
(A8)
$$\begin{aligned}{} & {} {\varvec{i}}_{+}\begin{bmatrix}{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{0}}\end{bmatrix}={\varvec{r}}\begin{bmatrix}{\textbf{C}},&{\textbf{P}}\end{bmatrix} \;\; \hbox { and } \;\; {\varvec{i}}_{-}\begin{bmatrix}{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{0}}\end{bmatrix}={\varvec{r}}({\textbf{P}}) \;\; \hbox { if } \;\; {\textbf{C}}\succcurlyeq {\textbf{0}}. \end{aligned}$$
(A9)
$$\begin{aligned}{} & {} {\varvec{i}}_{\pm }\begin{bmatrix}{\textbf{C}}&{} {\textbf{P}}\\ {\textbf{P}}' &{} {\textbf{D}}\end{bmatrix}={\varvec{i}}_{\pm }({\textbf{C}})+ {\varvec{i}}_{\pm }({\textbf{D}}-{\textbf{P}}'{\textbf{C}}^{+}{\textbf{P}}) \;\; \hbox { if } \;\; {\mathscr {C}}({\textbf{P}}) \subseteq {\mathscr {C}}({\textbf{C}}). \end{aligned}$$
(A10)
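The block-matrix inertia formulas can likewise be illustrated numerically. The sketch below (illustrative only; the random test matrices and the `inertia` helper are hypothetical choices, assuming NumPy) checks (A7) for a matrix with zero diagonal blocks and (A9) for a bordered matrix with a positive semidefinite leading block.

```python
import numpy as np

def inertia(M, tol=1e-9):
    """Pair (i_+, i_-) of a symmetric matrix via its eigenvalues."""
    w = np.linalg.eigvalsh((M + M.T) / 2)
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

rng = np.random.default_rng(0)
k, n = 5, 3
P = rng.standard_normal((k, n))
B = rng.standard_normal((k, k))
C = B @ B.T                      # symmetric positive semidefinite

# (A7): the matrix with zero diagonal blocks has inertia (r(P), r(P)).
M7 = np.block([[np.zeros((k, k)), P], [P.T, np.zeros((n, n))]])
rP = np.linalg.matrix_rank(P)
assert inertia(M7) == (rP, rP)

# (A9): with C >= 0, i_+ = r([C, P]) and i_- = r(P).
M9 = np.block([[C, P], [P.T, np.zeros((n, n))]])
ip, im = inertia(M9)
assert ip == np.linalg.matrix_rank(np.hstack([C, P]))
assert im == rP
print(ip, im)
```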

The constrained quadratic matrix-valued optimization problem underlying the minimization of the dispersion matrices of predictors is given in the following lemma; see Tian (2012).

Lemma A.3

Let \({\textbf{K}}\in {\mathbb {R}}^{n\times p}\) and \({\textbf{D}}\in {\mathbb {R}}^{m\times p}\) be given matrices, and let \({\textbf{Q}}\in {\mathbb {R}}^{n\times n}\) be a symmetric positive semi-definite matrix. Assume that there exists \({\textbf{X}}_{0} \in {\mathbb {R}}^{m\times n}\) such that \({\textbf{X}}_{0}{\textbf{K}}={\textbf{D}}\). Then the maximal positive inertia of \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'-{\textbf{X}}{\textbf{Q}}{\textbf{X}}'\) subject to all solutions of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) is

$$\begin{aligned} \max _{{\textbf{X}}{\textbf{K}}={\textbf{D}}} {\varvec{i}}_{+}({\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'-{\textbf{X}}{\textbf{Q}}{\textbf{X}}') ={\varvec{r}}\begin{bmatrix} {\textbf{X}}_{0}{\textbf{Q}}\\ {\textbf{K}}' \end{bmatrix}-{\varvec{r}}({\textbf{K}}) ={\varvec{r}}({\textbf{X}}_{0}{\textbf{Q}}{\textbf{K}}^{\perp }). \end{aligned}$$

Hence a solution \({\textbf{X}}_{0}\) of \({\textbf{X}}_{0}{\textbf{K}}={\textbf{D}}\) satisfies \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{X}}_{0}'\preccurlyeq {\textbf{X}}{\textbf{Q}}{\textbf{X}}'\) for all solutions \({\textbf{X}}\) of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\) if and only if \({\textbf{X}}_{0}\) satisfies both \({\textbf{X}}_{0}{\textbf{K}}={\textbf{D}}\) and \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{K}}^{\perp }={\textbf{0}}\).
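Lemma A.3 can be illustrated numerically: starting from an arbitrary solution of \({\textbf{X}}{\textbf{K}}={\textbf{D}}\), a minimizing solution is obtained by additionally enforcing \({\textbf{X}}_{0}{\textbf{Q}}{\textbf{K}}^{\perp }={\textbf{0}}\). The sketch below is illustrative only and assumes NumPy; here \({\textbf{K}}^{\perp }\) is taken as the orthogonal projector \({\textbf{I}}-{\textbf{K}}{\textbf{K}}^{+}\), and the combined linear system is solved with the Moore–Penrose inverse. It verifies the rank equality for the maximal positive inertia and the Löwner minimality of the resulting quadratic form.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 3, 6, 2
K = rng.standard_normal((n, p))
Xa = rng.standard_normal((m, n))           # an arbitrary solution of X K = D
D = Xa @ K
B = rng.standard_normal((n, n))
Q = B @ B.T                                # symmetric positive semidefinite

Kperp = np.eye(n) - K @ np.linalg.pinv(K)  # one choice of K-perp: I - K K^+

# Rank equality for the maximal positive inertia in Lemma A.3:
# r([Xa Q; K']) - r(K) = r(Xa Q Kperp).
lhs = np.linalg.matrix_rank(np.vstack([Xa @ Q, K.T])) - np.linalg.matrix_rank(K)
rhs = np.linalg.matrix_rank(Xa @ Q @ Kperp)
assert lhs == rhs

# A minimizing X0 satisfies X0 K = D and X0 Q Kperp = 0 jointly; this
# combined system is consistent and solvable via the pseudoinverse.
J = np.hstack([K, Q @ Kperp])
X0 = np.hstack([D, np.zeros((m, n))]) @ np.linalg.pinv(J)
assert np.allclose(X0 @ K, D)
assert np.allclose(X0 @ Q @ Kperp, 0)

# Loewner minimality: Xa Q Xa' - X0 Q X0' is positive semidefinite.
diff = Xa @ Q @ Xa.T - X0 @ Q @ X0.T
assert np.linalg.eigvalsh((diff + diff.T) / 2).min() > -1e-9
print("minimizing X0 verified")
```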

Proof of Lemma 1

Let \({\textbf{T}}{{\widehat{{\textbf{y}}}}}\) be an unbiased linear predictor for \({\textbf{n}}\) in \({{\widehat{{\mathcal {N}}}}}\). Then,

$$\begin{aligned}&{{\,\textrm{E}\,}}({\textbf{T}}{{\widehat{{\textbf{y}}}}}-{\textbf{n}}) ={\textbf{0}}\Leftrightarrow {\textbf{T}}\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}}\end{bmatrix}={\widehat{{\textbf{N}}}}, \; \hbox { i.e., } \;\begin{bmatrix}{\textbf{T}},&-{\textbf{I}}_{t}\end{bmatrix}\begin{bmatrix}{{\widehat{{\textbf{X}}}}} &{} {{\widehat{{\textbf{A}}}}}\\ {\textbf{N}}&{} {\textbf{0}}\end{bmatrix}&={\textbf{0}}, \end{aligned}$$
(A11)
$$\begin{aligned}&{{\,\textrm{D}\,}}({\textbf{T}}{{\widehat{{\textbf{y}}}}}-{\textbf{n}}) =\varvec{\sigma }^2({\textbf{T}}-{\textbf{R}}){{\widehat{\varvec{\Gamma }}}}({\textbf{T}}-{\textbf{R}})' \nonumber \\&\quad =\varvec{\sigma }^2\begin{bmatrix}{\textbf{T}},&-{\textbf{I}}_{t}\end{bmatrix}\begin{bmatrix}{\textbf{I}}_{n+m}\\ {\textbf{R}}\end{bmatrix}{{\widehat{\varvec{\Gamma }}}}\begin{bmatrix}{\textbf{I}}_{n+m}\\ {\textbf{R}}\end{bmatrix}'\begin{bmatrix}{\textbf{T}},&-{\textbf{I}}_{t}\end{bmatrix}':={\varvec{f}}({\textbf{T}}). \end{aligned}$$
(A12)

Expressions similar to (A11) and (A12) can be written for the other unbiased predictor \({\textbf{S}}{{\widehat{{\textbf{y}}}}}\) by replacing \({\textbf{T}}\) with \({\textbf{S}}\). Finding a solution \({\textbf{T}}\) of the consistent equation \({\textbf{T}}\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}}\end{bmatrix}={\widehat{{\textbf{N}}}}\) such that

$$\begin{aligned} \begin{aligned} {\varvec{f}}({\textbf{T}}) \preccurlyeq {\varvec{f}}({\textbf{S}}) \; \hbox { s.t. } \; {\textbf{S}}\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}}\end{bmatrix}={\widehat{{\textbf{N}}}} \end{aligned} \end{aligned}$$
(A13)

corresponds to finding the \({{\,\textrm{BLUP}\,}}\) of \({\textbf{n}}\) under \({{\widehat{{\mathcal {N}}}}}\). (A13) is a standard constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering, as given in Lemma A.3. Applying this lemma to (A13) yields (9), and Lemma A.3 also gives the fundamental equation of the \({{\,\textrm{BLUP}\,}}\) of \({\textbf{n}}\) in (10). It is well known that the general solution of (10) can be written as in (11). Item (1) follows from (11). The expressions in item (2) are well-known results; see also (Tian 2013, Lemma 2.1(a)). The equalities in item (3) are seen from (7) and (11). \(\square \)

Proof of Theorem 1

Let \({\textbf{G}}=\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}} \end{bmatrix}\) and \({\textbf{H}}={{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]\). Using (13) and (A10),

$$\begin{aligned}&{\varvec{i}}_{\pm }({\textbf{H}}-{{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {N}}}}}}({\textbf{n}})}])\nonumber \\&\quad ={\varvec{i}}_{\pm }\left( {\textbf{H}}-\left( \begin{bmatrix}{\widehat{{\textbf{N}}}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}{\textbf{J}}_{n}^{+} -{\textbf{R}}\right) {{\widehat{\varvec{\Gamma }}}}{{\widehat{\varvec{\Gamma }}}}^{+}{{\widehat{\varvec{\Gamma }}}}\left( \begin{bmatrix}{\widehat{{\textbf{N}}}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}{\textbf{J}}_{n}^{+} -{\textbf{R}}\right) '\right) \nonumber \\&\quad ={\varvec{i}}_{\pm }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}}&{{\widehat{\varvec{\Gamma }}}}\left( \begin{bmatrix}{\widehat{{\textbf{N}}}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}{\textbf{J}}_{n}^{+} -{\textbf{R}}\right) ' \\ \left( \begin{bmatrix}{\widehat{{\textbf{N}}}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}{\textbf{J}}_{n}^{+} -{\textbf{R}}\right) {{\widehat{\varvec{\Gamma }}}}&{\textbf{H}}\end{bmatrix}-{\varvec{i}}_{\pm }({{\widehat{\varvec{\Gamma }}}})\nonumber \\&\quad ={\varvec{i}}_{\pm }\left( \begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ -{\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{H}}\end{bmatrix} +\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}\\ {\textbf{0}}&{} \begin{bmatrix}{\widehat{{\textbf{N}}}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix} \end{bmatrix} \begin{bmatrix} {\textbf{0}}&{} {\textbf{J}}_{n} \\ {\textbf{J}}_{n}' &{} {\textbf{0}}\end{bmatrix}^{+} \begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}\\ {\textbf{0}}&{} \begin{bmatrix}{\widehat{{\textbf{N}}}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}' \end{bmatrix}\right) \nonumber 
\\&\qquad -{\varvec{i}}_{\pm }({{\widehat{\varvec{\Gamma }}}}) \end{aligned}$$
(A14)

is obtained, where \({\textbf{J}}_{n}=\begin{bmatrix}{\textbf{G}},&{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}\). It is seen from (12) and Definition 1 that \({\mathscr {C}}({{\widehat{\varvec{\Gamma }}}})\subseteq {\mathscr {C}}({\textbf{J}}_{n})\) and \({\mathscr {C}}\left( \begin{bmatrix}{\widehat{{\textbf{N}}}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }\end{bmatrix}'\right) \subseteq {\mathscr {C}}({\textbf{J}}_{n}')\) hold. Then, applying (A10) to (A14), using (A5), (A7), and (12) together with elementary block matrix operations, and substituting \({\textbf{J}}_{n}\) into (A14), we can equivalently rewrite (A14) as

$$\begin{aligned}&{\varvec{i}}_{\pm }\begin{bmatrix} {\textbf{0}}&{} -{\textbf{G}}&{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } &{} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}\\ -{\textbf{G}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\widehat{{\textbf{N}}}}'\\ -{\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ {\textbf{0}}&{} {\widehat{{\textbf{N}}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } &{} -{\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{H}}\end{bmatrix}- {\varvec{i}}_{\mp }\begin{bmatrix} {\textbf{0}}&{} {\textbf{G}}&{} {{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } \\ {\textbf{G}}' &{} {\textbf{0}}&{} {\textbf{0}}\\ {\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}\end{bmatrix} -{\varvec{i}}_{\pm }({{\widehat{\varvec{\Gamma }}}})\nonumber \\&\quad ={\varvec{i}}_{\pm }\begin{bmatrix} -{{\widehat{\varvec{\Gamma }}}} &{} -{\textbf{G}}&{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ -{\textbf{G}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\widehat{{\textbf{N}}}}'\\ -{\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\widehat{{\textbf{N}}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } &{} {\textbf{H}}-{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \end{bmatrix}-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp } \end{bmatrix}\nonumber \\&\quad ={\varvec{i}}_{\mp }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} 
{\textbf{G}}&{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \\ {\textbf{G}}' &{} {\textbf{0}}&{} {\widehat{{\textbf{N}}}}' \\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\widehat{{\textbf{N}}}} &{} -{\textbf{H}}+{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' \end{bmatrix}+ {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp })-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}\nonumber \\&\quad ={\varvec{i}}_{\mp }\left( \begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{G}}\\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\widehat{{\textbf{N}}}} \\ {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}\end{bmatrix}-\begin{bmatrix} {\textbf{0}}\\ {\textbf{I}}_{t} \\ {\textbf{0}}\end{bmatrix}{\textbf{H}}\begin{bmatrix} {\textbf{0}}\\ {\textbf{I}}_{t} \\ {\textbf{0}}\end{bmatrix}'\right) + {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp })-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}. \end{aligned}$$
(A15)

We can reapply (A10) to (A15) after substituting \({\textbf{H}}={{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]\) in (14). Then, proceeding as in the derivation of (A14), (A15) is equivalently written as

$$\begin{aligned}&{\varvec{i}}_{\mp }\left( \begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}'&{} {\textbf{0}}\\ {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{G}}\\ -{\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\widehat{{\textbf{N}}}} \\ {\textbf{0}}&{} {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}\end{bmatrix} +\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{0}}\\ {\textbf{0}}&{} \begin{bmatrix}{\textbf{N}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix} \\ {\textbf{0}}&{} {\textbf{0}}\end{bmatrix} \begin{bmatrix} {\textbf{0}}&{} {\textbf{J}}_{m} \\ {\textbf{J}}_{m}' &{} {\textbf{0}}\end{bmatrix}^{+} \right. \nonumber \\&\quad \left. \times \begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{0}}&{} \begin{bmatrix}{\textbf{N}}, &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix}'&{\textbf{0}}\end{bmatrix}\right) + {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp })-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{i}}_{\mp }({{\widehat{\varvec{\Gamma }}}}), \end{aligned}$$
(A16)

where \({\textbf{J}}_{m}=\begin{bmatrix}{{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix}\). We can apply (A10) to (A16) since \({\mathscr {C}}({{\widehat{\varvec{\Gamma }}}})\subseteq {\mathscr {C}}({\textbf{J}}_{m})\) and \({\mathscr {C}}(\begin{bmatrix}{\textbf{N}},&{\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp }\end{bmatrix}')\subseteq {\mathscr {C}}({\textbf{J}}_{m}')\) hold. Then, by substituting \({\textbf{J}}_{m}\), (A16) is equivalently written as

$$\begin{aligned}&{\varvec{i}}_{\mp }\begin{bmatrix} {\textbf{0}}&{} -{{\widehat{{\textbf{X}}}}} &{} -{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } &{} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}\\ -{{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{N}}' &{} {\textbf{0}}\\ -{{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}\\ {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} -{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}'&{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{G}}\\ {\textbf{0}}&{} {\textbf{N}}&{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } &{} -{\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\widehat{{\textbf{N}}}}\\ {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}\end{bmatrix}- {\varvec{i}}_{\pm }\begin{bmatrix} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}} &{} {{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } \\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}\\ {{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}\end{bmatrix} \nonumber \\&\quad + {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp })-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{i}}_{\mp }({{\widehat{\varvec{\Gamma }}}}). \end{aligned}$$
(A17)

Using (A5)–(A8) and some congruence operations, (A17) is equivalently written as

$$\begin{aligned}&{\varvec{i}}_{\mp }\begin{bmatrix} -{{\widehat{\varvec{\Gamma }}}} &{} -{{\widehat{{\textbf{X}}}}} &{} -{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } &{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}\\ -{{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{N}}' &{} {\textbf{0}}\\ -{{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{G}}\\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{N}}&{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } &{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\widehat{{\textbf{N}}}} \\ {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}\end{bmatrix}- {\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp } \end{bmatrix} + {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp }) \nonumber \\&\qquad -{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix} \nonumber \\&\quad ={\varvec{i}}_{\mp }\begin{bmatrix} -{{\widehat{\varvec{\Gamma }}}} &{} -{{\widehat{{\textbf{X}}}}} &{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}\\ -{{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{N}}' &{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{G}}\\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{N}}&{} {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} 
{\textbf{0}}&{} {\widehat{{\textbf{N}}}} \\ {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}\end{bmatrix}+{\varvec{i}}_{\mp }({{\widehat{{\textbf{X}}}}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{{\widehat{{\textbf{X}}}}}^{\perp })- {\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix} + {\varvec{i}}_{\pm }({\textbf{G}}^{\perp }{{\widehat{\varvec{\Gamma }}}}{\textbf{G}}^{\perp })\nonumber \\&\qquad -{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix} \nonumber \\&\quad ={\varvec{i}}_{\pm }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}} \\ {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{G}}&{} {\textbf{0}}\\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\widehat{{\textbf{N}}}} &{} {\textbf{0}}\\ {\textbf{0}}&{} {\textbf{G}}' &{} {\widehat{{\textbf{N}}}}' &{} {\textbf{0}}&{} {\textbf{0}}\\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}\end{bmatrix} +{\varvec{i}}_{\mp }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{{\textbf{X}}}}}\\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}\end{bmatrix} -{\varvec{r}}({{\widehat{{\textbf{X}}}}}) -{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix} +{\varvec{i}}_{\pm }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{G}}\\ {\textbf{G}}' &{} {\textbf{0}}\end{bmatrix}\nonumber \\&\qquad -{\varvec{r}}({\textbf{G}})-{\varvec{r}}\begin{bmatrix} {\textbf{G}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}\end{aligned}$$
(A18)
$$\begin{aligned}&\quad ={\varvec{i}}_{\pm }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{\varvec{\Gamma }}}}{\textbf{R}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}} \\ {{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}} &{} {{\widehat{{\textbf{A}}}}} &{} {\textbf{0}}\\ {\textbf{R}}{{\widehat{\varvec{\Gamma }}}} &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{N}}&{} {\textbf{0}}&{} {\textbf{0}}\\ {\textbf{0}}&{} {{\widehat{{\textbf{X}}}}}' &{} {\textbf{N}}' &{} {\textbf{0}}&{} {\textbf{0}}&{}{\textbf{0}}\\ {\textbf{0}}&{} {{\widehat{{\textbf{A}}}}}' &{}{\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{}{\textbf{0}}\\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}&{} {\textbf{0}}\end{bmatrix}+{\varvec{i}}_{\mp }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{{\textbf{X}}}}}\\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}\end{bmatrix} -{\varvec{r}}({{\widehat{{\textbf{X}}}}}) -{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix} +{\varvec{i}}_{\pm }\begin{bmatrix} {{\widehat{\varvec{\Gamma }}}} &{} {{\widehat{{\textbf{X}}}}} &{} {{\widehat{{\textbf{A}}}}}\\ {{\widehat{{\textbf{X}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}\\ {{\widehat{{\textbf{A}}}}}' &{} {\textbf{0}}&{} {\textbf{0}}\end{bmatrix}\nonumber \\&\qquad -{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}} \end{bmatrix}-{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}, \end{aligned}$$
(A19)

where (A19) is obtained by substituting \({\textbf{G}}\) into (A18). Consequently, by applying (A9) and writing \({\textbf{S}}\) for the first matrix in (A19), we obtain the positive and negative inertias of the difference \({{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]-{{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {N}}}}}}({\textbf{n}})}]\) as follows:

$$\begin{aligned} \begin{aligned} {\varvec{i}}_{+}({\textbf{S}})={\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}} \end{bmatrix} \;\hbox { and } \; {\varvec{i}}_{-}({\textbf{S}})={\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{r}}({{\widehat{{\textbf{X}}}}}), \end{aligned} \end{aligned}$$
(A20)

respectively. The rank of \({{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {M}}}}}}({\textbf{n}})}]-{{\,\textrm{D}\,}}[{{\textbf{n}}-{{\,\textrm{BLUP}\,}}_{{{\widehat{{\mathcal {N}}}}}}({\textbf{n}})}]\) is written as

$$\begin{aligned} \begin{aligned} {\varvec{r}}({\textbf{S}})={\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}} \end{bmatrix}+{\varvec{r}}\begin{bmatrix} {{\widehat{{\textbf{X}}}}},&{{\widehat{{\textbf{A}}}}},&{{\widehat{\varvec{\Gamma }}}} \end{bmatrix}-{\varvec{r}}({{\widehat{{\textbf{X}}}}}) \end{aligned} \end{aligned}$$
(A21)

from (A4), by adding the expressions in (A20). Applying (A1)–(A3) to (A20) and (A21) yields items (1)–(5). \(\square \)


About this article


Cite this article

Güler, N., Büyükkaya, M.E. Further remarks on constrained over-parameterized linear models. Stat Papers 65, 975–988 (2024). https://doi.org/10.1007/s00362-023-01426-z
