The effect of errors-in-variables on variance component estimation

  • Original Article
  • Journal of Geodesy

Abstract

Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were error-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from it. As a result, we obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of the problem and the size of the EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise ratio is small, higher-order terms may be necessary. Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk of obtaining negative values for the variance components. VC estimation in EIV models remains difficult and challenging; and (iii) both the bias-corrected weighted LS estimate and the N-calibrated weighted LS estimate clearly outperform the weighted LS estimate. The N-calibrated weighted LS estimate, though constructed intuitively, is computationally less expensive and is shown to perform statistically even better than the bias-corrected weighted LS estimate in producing an almost unbiased estimate of the parameters.
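
The N-calibration idea summarized above can be conveyed by a small numerical sketch. The code below is illustrative only and is not the estimator analysed in the paper: it assumes, for simplicity, that the elements of the random design-matrix errors \(\mathbf {E}_A\) are independent with a known common variance \(\sigma _a^2\), in which case the bias of the normal matrix reduces to \(\sigma _a^2\,\text {tr}(\varvec{\Sigma }_{0y}^{-1})\mathbf {I}\); all variable names are ad hoc.

```python
# Minimal sketch of the N-calibration idea: remove the (here: i.i.d.-case)
# bias of the normal matrix before solving the weighted LS normal equations.
# Illustrative only; the paper treats general covariance structures.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 3
A_bar = rng.normal(size=(n, m))          # true (error-free) design matrix
beta = np.array([1.0, -2.0, 0.5])        # true parameters
Sigma_inv = np.eye(n)                    # weight matrix Sigma_{0y}^{-1}
sigma_a, sigma_y = 0.1, 0.1              # std of E_A and of epsilon

est_ls, est_cal = [], []
for _ in range(2000):
    E_A = sigma_a * rng.normal(size=(n, m))
    A = A_bar + E_A                      # observed (random) design matrix
    y = A_bar @ beta + sigma_y * rng.normal(size=n)
    N = A.T @ Sigma_inv @ A              # random normal matrix
    # E{E_A^T Sigma^{-1} E_A} = sigma_a^2 tr(Sigma^{-1}) I under i.i.d. errors
    N_cal = N - sigma_a**2 * np.trace(Sigma_inv) * np.eye(m)
    rhs = A.T @ Sigma_inv @ y
    est_ls.append(np.linalg.solve(N, rhs))       # ordinary weighted LS
    est_cal.append(np.linalg.solve(N_cal, rhs))  # N-calibrated weighted LS

print("mean bias, weighted LS  :", np.mean(est_ls, axis=0) - beta)
print("mean bias, N-calibrated :", np.mean(est_cal, axis=0) - beta)
```

In this simplified setting the ordinary weighted LS estimate shows a clear attenuation-type bias, while the calibrated estimate is nearly unbiased.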


References

  • Adcock RJ (1877) Note on the method of least squares. Analyst 4:183–184

  • Amiri-Simkooei A, Jazaeri S (2012) Weighted total least squares formulated by standard least squares theory. J Geod Sci 2:113–124

  • Davies RB, Hutton B (1975) The effect of errors in the independent variables in linear regression. Biometrika 62:383–391

  • De Moor B (1993) Structured total least squares and \(L_2\) approximation problems. Linear Algebra Appl 189:163–205

  • Deming WE (1931) The application of least squares. Philos Mag 11:146–158

  • Deming WE (1934) On the application of least squares—II. Philos Mag 17:804–829

  • Fang X (2013) Weighted total least squares: necessary and sufficient conditions, fixed and random parameters. J Geod 87:733–749

  • Felus YA (2004) Application of total least squares for spatial point process analysis. J Surv Eng 130:126–133

  • Gerhold GA (1969) Least-squares adjustment of weighted data to a general linear equation. Am J Phys 37:156–161

  • Golub GH, van Loan CF (1980) An analysis of the total least squares problem. SIAM J Numer Anal 17:883–893

  • Hodges SD, Moore PG (1972) Data uncertainties and least squares regression. Appl Stat 21:185–195

  • Koch K-R (1999) Parameter estimation and hypothesis testing in linear models, 2nd edn. Springer, Berlin

  • Kummell CH (1879) Reduction of observation equations which contain more than one observed quantity. Analyst 6:97–105

  • Magnus JR, Neudecker H (1988) Matrix differential calculus with applications in statistics and econometrics. Wiley, New York

  • Mann ME, Emanuel KA (2006) Atlantic hurricane trends linked to climate change. EOS 87(24):233–241

  • Neitzel F (2010) Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation. J Geod 84:751–762

  • Pearson K (1901) On lines and planes of closest fit to systems of points in space. Philos Mag 2:559–572

  • Rao CR (1971) Estimation of variance and covariance components—MINQUE theory. J Multivar Anal 1:257–275

  • Rao CR, Kleffe J (1988) Estimation of variance components and applications. North-Holland, Amsterdam

  • Schaffrin B, Wieser A (2008) On weighted total least-squares adjustment for linear regression. J Geod 82:415–421

  • Schaffrin B, Felus YA (2008) On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms. J Geod 82:373–383

  • Searle SR (1971) Linear models. Wiley, New York

  • Seber G, Wild C (1989) Nonlinear regression. Wiley, New York

  • Shi Y, Xu PL (2015) Comparing the estimates of the variance of unit weight in multiplicative error models. Acta Geod Geophys 50:353–363

  • Shi Y, Xu PL, Liu JN, Shi C (2015) Alternative formulae for parameter estimation in partial errors-in-variables models. J Geod 89:13–16

  • Snow K (2012) Topics in total least-squares adjustment within the errors-in-variables model: singular cofactor matrices and prior information. Technical report no. 502, Geodetic Science, The Ohio State University, Columbus

  • van Huffel S, Vandewalle J (1991) The total least squares problem: computational aspects and analysis. SIAM, Philadelphia

  • Xu PL (2004) Determination of stress tensors from fault-slip data. Geophys J Int 157:1316–1330

  • Xu PL (2009) Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems. Geophys J Int 179:182–200

  • Xu PL, Liu JN (2013) Variance components in errors-in-variables models: estimability, stability and bias analysis. Invited talk, VIII Hotine-Marussi symposium on mathematical geodesy, Rome, 17–21 June 2013

  • Xu PL, Liu JN (2014) Variance components in errors-in-variables models: estimability, stability and bias analysis. J Geod 88:719–734

  • Xu PL, Shen YZ, Fukuda Y, Liu YM (2006) Variance component estimation in inverse ill-posed linear models. J Geod 80:69–81

  • Xu PL, Liu JN, Shi C (2012) Total least squares adjustment in partial errors-in-variables models: algorithm and statistical analysis. J Geod 86:661–675

  • Xu PL, Liu JN, Zeng W, Shen YZ (2014) Effects of errors-in-variables on weighted least squares estimation. J Geod 88:705–716

Acknowledgments

This work is partially supported by a Grant-in-Aid for Scientific Research (C25400449). The author thanks the three anonymous reviewers very much for their constructive comments, which have helped to improve the paper.

Author information

Correspondence to Peiliang Xu.

Appendices: the derivations of \(\mathbf {S}_{1ij}(\mathbf {E}_A)\) and \(\mathbf {S}_{2ij}(\mathbf {E}_A^2)\), the expectation of \(\mathbf {S}_2(\mathbf {E}_A^2)\), and the second-order approximation of \(\hat{\varvec{\beta }}\)

1.1 Appendix A: the derivations of \(\mathbf {S}_{1ij}(\mathbf {E}_A)\) and \(\mathbf {S}_{2ij}(\mathbf {E}_A^2)\)

To find the linear and second-order terms of \(\mathbf {S}_{ij}\) with respect to \(\mathbf {E}_A\), we expand \(\mathbf {S}_{ij}=\mathbf {P}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy}\) into a Taylor series truncated at the second-order approximation at the point \(\overline{\mathbf {A}}\), namely

$$\begin{aligned} \mathbf {S}_{ij} = \overline{\mathbf {S}}_{ij} + d\mathbf {S}_{ij} + \frac{1}{2}d^2\mathbf {S}_{ij}. \end{aligned}$$
(44)

After replacing the differential \(d\mathbf {A}\) with \(\mathbf {E}_A\), we can then readily obtain \(\mathbf {S}_{1ij}(\mathbf {E}_A)\) and \(\mathbf {S}_{2ij}(\mathbf {E}_A^2)\).

To start with, we follow Magnus and Neudecker (1988), apply the matrix differentials to \(\mathbf {S}_{ij}\) and obtain

$$\begin{aligned} d\mathbf {S}_{ij} = d\mathbf {P}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy} + \mathbf {P}\mathbf {U}_{iy}d\mathbf {P}\mathbf {U}_{jy}, \end{aligned}$$
(45)

and

$$\begin{aligned} d^2\mathbf {S}_{ij}= & {} d\{d\mathbf {P}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy}\} + d\{\mathbf {P}\mathbf {U}_{iy}d\mathbf {P}\mathbf {U}_{jy}\} \nonumber \\= & {} d^2\mathbf {P}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy} + d\mathbf {P}\mathbf {U}_{iy}d\mathbf {P}\mathbf {U}_{jy} \nonumber \\&+ \, d\mathbf {P}\mathbf {U}_{iy}d\mathbf {P}\mathbf {U}_{jy} + \mathbf {P}\mathbf {U}_{iy}d^2\mathbf {P}\mathbf {U}_{jy}. \end{aligned}$$
(46)

To represent (45) and (46) in terms of \(\mathbf {E}_A\), we will have to find \(d\mathbf {P}\) and \(d^2\mathbf {P}\). Since \(\mathbf {P}=\varvec{\Sigma }_{0y}^{-1}\mathbf {R}\), we have

$$\begin{aligned} d\mathbf {P}=\varvec{\Sigma }_{0y}^{-1}d\mathbf {R}, \end{aligned}$$
(47)

and

$$\begin{aligned} d^2\mathbf {P}=\varvec{\Sigma }_{0y}^{-1}d^2\mathbf {R}. \end{aligned}$$
(48)

We now apply the matrix differentials to (6d) and have

$$\begin{aligned} d\mathbf {R}= & {} -d\mathbf {A}\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A}d\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\= & {} -d\mathbf {A}\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A}(-\mathbf {N}^{-1}d\mathbf {N}\mathbf {N}^{-1}) \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\nonumber \\&- \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\= & {} -d\mathbf {A}\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} + \mathbf {A}\mathbf {N}^{-1}( d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {A} + \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {A} )\nonumber \\&\times \,\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\= & {} -d\mathbf {A}\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} + \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {A}\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\= & {} - \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R}, \end{aligned}$$
(49)

and

$$\begin{aligned} d^2\mathbf {R}= & {} d(d\mathbf {R}) \nonumber \\= & {} -d\{\mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\} - d\{\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R}\} \nonumber \\= & {} - d\mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - \mathbf {R} d\mathbf {A} d\mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\nonumber \\&- \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} - \mathbf {A} d\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R}\nonumber \\&- \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} d\mathbf {R}. \end{aligned}$$
(50)

Substituting \(d\mathbf {R}\) of (49) into (50) and taking \(d\mathbf {N}^{-1}=-\mathbf {N}^{-1}d\mathbf {N}\mathbf {N}^{-1}\) into account, after some lengthy technical derivations, we obtain

$$\begin{aligned} d^2\mathbf {R}= & {} (\mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} + \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R}) d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} (d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A} +\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A} ) \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} (d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A} +\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A} ) \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} (\mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} + \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R}) \nonumber \\= & {} \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\= & {} 2 \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, 2 \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A} \mathbf {N}^{-1} \mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, \mathbf {R} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} - d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, 2 \mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A}\mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\&+ \, \mathbf {A} \mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A} \mathbf {N}^{-1} d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {R} \nonumber \\= & {} 2 \mathbf {R}d\mathbf {A}\mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} d\mathbf {A}\mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, 2 \mathbf {A}\mathbf {N}^{-1}d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {R} d\mathbf {A}\mathbf {N}^{-1}\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, 2 \mathbf {R}d\mathbf {A}\mathbf {N}^{-1}d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {R} \nonumber \\&+ \, 2 \mathbf {A}\mathbf {N}^{-1}d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {A}\mathbf {N}^{-1}d\mathbf {A}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {R}. \end{aligned}$$
(51)
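
The differentials (49) and (51) can be verified numerically. The following sketch is illustrative only (random stand-in matrices, identity weight matrix): it perturbs \(\mathbf {A}\) along a fixed direction \(\mathbf {E}_A\) scaled by \(t\) and confirms that the remainder of the second-order expansion of \(\mathbf {R}\) shrinks like \(t^3\).

```python
# Numerical check of dR in (49) and d^2R in (51), with dA = E_A.
# Random stand-ins for A and E_A; Sigma_{0y}^{-1} is taken as the identity.
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4
A_bar = rng.normal(size=(n, m))
E_A = rng.normal(size=(n, m))
W = np.eye(n)                            # Sigma_{0y}^{-1}

def R_of(A):
    N = A.T @ W @ A
    return np.eye(n) - A @ np.linalg.solve(N, A.T @ W)

N_inv = np.linalg.inv(A_bar.T @ W @ A_bar)
R_bar = R_of(A_bar)

# eq. (49)
dR = -R_bar @ E_A @ N_inv @ A_bar.T @ W - A_bar @ N_inv @ E_A.T @ W @ R_bar
# eq. (51), final expression
d2R = (2 * R_bar @ E_A @ N_inv @ A_bar.T @ W @ E_A @ N_inv @ A_bar.T @ W
       + 2 * A_bar @ N_inv @ E_A.T @ W @ R_bar @ E_A @ N_inv @ A_bar.T @ W
       - 2 * R_bar @ E_A @ N_inv @ E_A.T @ W @ R_bar
       + 2 * A_bar @ N_inv @ E_A.T @ W @ A_bar @ N_inv @ E_A.T @ W @ R_bar)

for t in (1e-1, 1e-2, 1e-3):
    remainder = R_of(A_bar + t * E_A) - (R_bar + t * dR + 0.5 * t**2 * d2R)
    print(t, np.linalg.norm(remainder))  # decreases roughly like t**3
```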

To complete the derivation of \(\mathbf {S}_{1ij}(\mathbf {E}_A)\), we insert (47) and (49) into (45), replace \(d\mathbf {A}\) with \(\mathbf {E}_A\) and finally obtain the linear approximation of \(\mathbf {S}_{ij}\) in terms of \(\mathbf {E}_A\) as follows:

$$\begin{aligned} \mathbf {S}_{1ij}(\mathbf {E}_A)= & {} - \, \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy}\overline{\mathbf {P}}\mathbf {U}_{jy} \nonumber \\&- \, \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}} \overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {U}_{iy} \overline{\mathbf {P}}\mathbf {U}_{jy} \nonumber \\&- \, \overline{\mathbf {P}}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {E}_A \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{jy} \nonumber \\&- \, \overline{\mathbf {P}}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}} \overline{\mathbf {N}}^{-1} \mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {U}_{jy}. \end{aligned}$$
(52)

In a similar manner to (52), by inserting (47)–(49) and (51) into (46), and with \(d\mathbf {A}=\mathbf {E}_A\) in mind, we obtain the second-order approximation of \(\mathbf {S}_{ij}\) in terms of \(\mathbf {E}_A\) as follows:

$$\begin{aligned} \mathbf {S}_{2ij}(\mathbf {E}_A^2)= & {} \frac{1}{2}d^2\mathbf {S}_{ij} \nonumber \\= & {} \frac{1}{2}\varvec{\Sigma }_{0y}^{-1} d^2\mathbf {R}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy} + \varvec{\Sigma }_{0y}^{-1}d\mathbf {R}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} d\mathbf {R}\mathbf {U}_{jy}\nonumber \\&+\, \frac{1}{2}\mathbf {P}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}d^2\mathbf {R}\mathbf {U}_{jy} \nonumber \\= & {} \mathbf {S}_{2ij}^1(\mathbf {E}_A^2) + \mathbf {S}_{2ij}^2(\mathbf {E}_A^2) + \mathbf {S}_{2ij}^3(\mathbf {E}_A^2), \end{aligned}$$
(53)

where \(\mathbf {S}_{2ij}^1(\mathbf {E}_A^2)\), \(\mathbf {S}_{2ij}^2(\mathbf {E}_A^2)\) and \(\mathbf {S}_{2ij}^3(\mathbf {E}_A^2)\) are respectively given by

$$\begin{aligned} \mathbf {S}_{2ij}^1(\mathbf {E}_A^2)= & {} \frac{1}{2} \varvec{\Sigma }_{0y}^{-1} d^2\mathbf {R}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy} \nonumber \\= & {} \varvec{\Sigma }_{0y}^{-1} \{ \overline{\mathbf {R}} \mathbf {E}_A\overline{\mathbf {N}}^{-1} \overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {E}_A \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, \overline{\mathbf {R}} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \nonumber \\&+ \, \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \}\mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} \nonumber \\= & {} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {G} \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \overline{\mathbf {P}}\mathbf {U}_{jy} \nonumber \\&+ \, \varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} \nonumber \\&- \, \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {H}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} \nonumber \\&+ \, \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy}, \end{aligned}$$
(54a)
$$\begin{aligned} \mathbf {S}_{2ij}^2(\mathbf {E}_A^2)= & {} \varvec{\Sigma }_{0y}^{-1}d\mathbf {R}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} d\mathbf {R}\mathbf {U}_{jy} \nonumber \\= & {} \varvec{\Sigma }_{0y}^{-1} (- \overline{\mathbf {R}} \mathbf {E}_A \overline{\mathbf {N}}^{-1} \overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} - \overline{\mathbf {A}} \overline{\mathbf {N}}^{-1} \mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} ) \nonumber \\&\times \, \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} (- \overline{\mathbf {R}} \mathbf {E}_A \overline{\mathbf {N}}^{-1} \overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}\nonumber \\&-\,\overline{\mathbf {A}} \overline{\mathbf {N}}^{-1} \mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} )\mathbf {U}_{jy} \nonumber \\= & {} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {G} \varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{jy} \nonumber \\&+ \, \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{jy} \nonumber \\&+ \, \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}}\mathbf {G} \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{jy} \nonumber \\&+ \, \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {U}_{jy}, \end{aligned}$$
(54b)

and

$$\begin{aligned} \mathbf {S}_{2ij}^3(\mathbf {E}_A^2)= & {} \frac{1}{2} \mathbf {P}\mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} d^2\mathbf {R}\mathbf {U}_{jy} \nonumber \\= & {} \overline{\mathbf {P}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \{ \overline{\mathbf {R}} \mathbf {E}_A\overline{\mathbf {N}}^{-1} \overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&+ \, \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {E}_A \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \nonumber \\&- \, \overline{\mathbf {R}} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \nonumber \\&+ \, \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \}\mathbf {U}_{jy} \nonumber \\= & {} \overline{\mathbf {P}}\mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {G} \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{jy} \nonumber \\&+ \, \overline{\mathbf {P}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{jy} \nonumber \\&- \, \overline{\mathbf {P}} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {H}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{jy} \nonumber \\&+ \, \overline{\mathbf {P}} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{jy}. \end{aligned}$$
(54c)
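
As a closing sanity check on this appendix, the sketch below (again with random stand-in matrices and an identity weight matrix, so purely illustrative) evaluates \(\mathbf {S}_{ij}=\mathbf {P}\mathbf {U}_{iy}\mathbf {P}\mathbf {U}_{jy}\) exactly for \(\mathbf {A}=\overline{\mathbf {A}}+t\mathbf {E}_A\) and compares it with the truncated expansion (44), with \(\mathbf {S}_{1ij}\) and \(\mathbf {S}_{2ij}\) assembled from the differentials (49) and (51) according to (52) and (53).

```python
# Sanity check of the expansion (44) of S_ij = P U_iy P U_jy, with the linear
# and second-order terms built via (52)-(53) from the differentials (49), (51).
# All matrices are random stand-ins; Sigma_{0y}^{-1} is the identity.
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 3
A_bar = rng.normal(size=(n, m))
E_A = rng.normal(size=(n, m))
W = np.eye(n)                                    # Sigma_{0y}^{-1}
U_i = np.diag(rng.uniform(0.5, 2.0, n))          # stand-ins for U_{iy}, U_{jy}
U_j = np.diag(rng.uniform(0.5, 2.0, n))

def S_of(A):
    N = A.T @ W @ A
    P = W @ (np.eye(n) - A @ np.linalg.solve(N, A.T @ W))
    return P @ U_i @ P @ U_j

N_inv = np.linalg.inv(A_bar.T @ W @ A_bar)
R_bar = np.eye(n) - A_bar @ N_inv @ A_bar.T @ W
P_bar = W @ R_bar
S_bar = S_of(A_bar)

def dR_of(E):                                    # eq. (49)
    return -R_bar @ E @ N_inv @ A_bar.T @ W - A_bar @ N_inv @ E.T @ W @ R_bar

def d2R_of(E):                                   # eq. (51)
    return (2 * R_bar @ E @ N_inv @ A_bar.T @ W @ E @ N_inv @ A_bar.T @ W
            + 2 * A_bar @ N_inv @ E.T @ W @ R_bar @ E @ N_inv @ A_bar.T @ W
            - 2 * R_bar @ E @ N_inv @ E.T @ W @ R_bar
            + 2 * A_bar @ N_inv @ E.T @ W @ A_bar @ N_inv @ E.T @ W @ R_bar)

for t in (1e-1, 1e-2, 1e-3):
    E = t * E_A
    dP, d2P = W @ dR_of(E), W @ d2R_of(E)
    S1 = dP @ U_i @ P_bar @ U_j + P_bar @ U_i @ dP @ U_j             # eq. (52)
    S2 = (0.5 * d2P @ U_i @ P_bar @ U_j + dP @ U_i @ dP @ U_j
          + 0.5 * P_bar @ U_i @ d2P @ U_j)                           # eq. (53)
    print(t, np.linalg.norm(S_of(A_bar + E) - (S_bar + S1 + S2)))    # ~ t**3
```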

1.2 Appendix B: the expectation of \(\mathbf {S}_2(\mathbf {E}_A^2)\)

To compute the expectation of the matrix \(\mathbf {S}_2(\mathbf {E}_A^2)\) in (25), we will need to find the expectation of each element of \(\mathbf {S}_2(\mathbf {E}_A^2)\), namely, \(s_{2ij}(\mathbf {E}_A^2)\) in (24d). Applying the expectation operator to (24d), we have

$$\begin{aligned} E\{s_{2ij}(\mathbf {E}_A^2)\}= & {} E\{ \text {tr}[ \mathbf {S}_{2ij}(\mathbf {E}_A^2) ] \} \nonumber \\= & {} E\{ \text {tr}[ \mathbf {S}_{2ij}^1(\mathbf {E}_A^2) ] \} + E\{ \text {tr}[ \mathbf {S}_{2ij}^2(\mathbf {E}_A^2) ] \}\nonumber \\&+ \,E\{ \text {tr}[ \mathbf {S}_{2ij}^3(\mathbf {E}_A^2) ] \} \nonumber \\= & {} s_{2ij}^1(\varvec{\Sigma }_a) + s_{2ij}^2(\varvec{\Sigma }_a) + s_{2ij}^3(\varvec{\Sigma }_a), \end{aligned}$$
(55)

where

$$\begin{aligned} s_{2ij}^1(\varvec{\Sigma }_a)&= \text {tr}[ E\{ \mathbf {S}_{2ij}^1(\mathbf {E}_A^2) \} ],\end{aligned}$$
(56a)
$$\begin{aligned} s_{2ij}^2(\varvec{\Sigma }_a)&= \text {tr}[ E\{ \mathbf {S}_{2ij}^2(\mathbf {E}_A^2)\} ], \end{aligned}$$
(56b)

and

$$\begin{aligned} s_{2ij}^3(\varvec{\Sigma }_a) = \text {tr}[ E\{ \mathbf {S}_{2ij}^3(\mathbf {E}_A^2) \} ]. \end{aligned}$$
(56c)

Before working out all three terms \(s_{2ij}^1(\varvec{\Sigma }_a)\), \(s_{2ij}^2(\varvec{\Sigma }_a)\) and \(s_{2ij}^3(\varvec{\Sigma }_a)\) in (56a)–(56c), respectively, let us first state four basic formulae. Given the four quadratic forms \(\mathbf {E}_A\mathbf {M}_1\mathbf {E}_A\), \(\mathbf {E}_A\mathbf {M}_2\mathbf {E}_A^T\), \(\mathbf {E}_A^T\mathbf {M}_3\mathbf {E}_A\), and \(\mathbf {E}_A^T\mathbf {M}_4\mathbf {E}_A^T\), we have

$$\begin{aligned} E\{ \mathbf {E}_A\mathbf {M}_1\mathbf {E}_A \}= & {} [ E\{ \mathbf {E}_A^{ri}\mathbf {M}_1\mathbf {E}_A^{cj} \} ] \nonumber \\= & {} [ \text {tr}\{ \mathbf {M}_1E(\mathbf {E}_A^{cj}\mathbf {E}_A^{ri}) \} ] \nonumber \\= & {} [ \text {tr}\{ \mathbf {M}_1 \mathbf {C}_{ji}^{cr} \} ], \end{aligned}$$
(57a)

which is an \((n\times m)\) matrix, where \(\mathbf {C}_{ji}^{cr}\) is the covariance matrix between the jth column vector \(\mathbf {E}_A^{cj}\) and the ith row vector \(\mathbf {E}_A^{ri}\) of \(\mathbf {E}_A\), namely \(\mathbf {C}_{ji}^{cr}=\text {cov}(\mathbf {E}_A^{cj},\, \mathbf {E}_A^{ri}).\)

In a similar manner to (57a), we have

$$\begin{aligned} E\{ \mathbf {E}_A\mathbf {M}_2\mathbf {E}_A^T \}= & {} [ \text {tr}\{ \mathbf {M}_2E((\mathbf {E}_A^{rj})^T\mathbf {E}_A^{ri} )\} ] \nonumber \\= & {} [ \text {tr}\{ \mathbf {M}_2\mathbf {C}_{ji}^{rr} \} ], \end{aligned}$$
(57b)
$$\begin{aligned} E\{ \mathbf {E}_A^T\mathbf {M}_3\mathbf {E}_A \}= & {} [ \text {tr}\{ \mathbf {M}_3 E(\mathbf {E}_A^{cj}(\mathbf {E}_A^{ci})^T) \} ] \nonumber \\= & {} [ \text {tr}\{ \mathbf {M}_3\mathbf {C}_{ji}^{cc} \} ], \end{aligned}$$
(57c)

and

$$\begin{aligned} E\{ \mathbf {E}_A^T\mathbf {M}_4\mathbf {E}_A^T \}= & {} [ \text {tr}\{ \mathbf {M}_4E(\mathbf {E}_A^{rj}(\mathbf {E}_A^{ci})^T) \} ] \nonumber \\= & {} [ \text {tr}\{ \mathbf {M}_4\mathbf {C}_{ji}^{rc} \} ], \end{aligned}$$
(57d)

where \(\mathbf {C}_{ji}^{rr}\) stands for the covariance matrix between the jth row vector \(\mathbf {E}_A^{rj}\) and the ith row vector \(\mathbf {E}_A^{ri}\) of \(\mathbf {E}_A\), \(\mathbf {C}_{ji}^{cc}\) for the covariance matrix between the jth column vector \(\mathbf {E}_A^{cj}\) and the ith column vector \(\mathbf {E}_A^{ci}\) of \(\mathbf {E}_A\), and \(\mathbf {C}_{ji}^{rc}\) for the covariance matrix between the jth row vector \(\mathbf {E}_A^{rj}\) and the ith column vector \(\mathbf {E}_A^{ci}\) of \(\mathbf {E}_A\).
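
The four formulae in (57) are easy to confirm by simulation. The sketch below is illustrative only: it assumes, for simplicity, that the elements of \(\mathbf {E}_A\) are independent with common variance \(\sigma _a^2\), so that \(\mathbf {C}_{ji}^{rr}=\sigma _a^2\delta _{ij}\mathbf {I}_m\) and \(\mathbf {C}_{ji}^{cc}=\sigma _a^2\delta _{ij}\mathbf {I}_n\), and (57b) and (57c) reduce to \(\sigma _a^2\,\text {tr}(\mathbf {M}_2)\mathbf {I}_n\) and \(\sigma _a^2\,\text {tr}(\mathbf {M}_3)\mathbf {I}_m\), respectively.

```python
# Monte Carlo check of the expectation formulae (57b) and (57c) in the special
# case of i.i.d. elements of E_A with variance sigma_a**2 (illustrative only;
# (57) itself covers arbitrary covariance matrices C_ji).
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma_a = 6, 3, 0.7
M2 = rng.normal(size=(m, m))             # arbitrary fixed matrices
M3 = rng.normal(size=(n, n))

mean_rr = np.zeros((n, n))               # accumulates E_A M2 E_A^T
mean_cc = np.zeros((m, m))               # accumulates E_A^T M3 E_A
n_rep = 100_000
for _ in range(n_rep):
    E_A = sigma_a * rng.normal(size=(n, m))
    mean_rr += E_A @ M2 @ E_A.T
    mean_cc += E_A.T @ M3 @ E_A
mean_rr /= n_rep
mean_cc /= n_rep

# residuals should be at the Monte Carlo noise level
print(np.max(np.abs(mean_rr - sigma_a**2 * np.trace(M2) * np.eye(n))))
print(np.max(np.abs(mean_cc - sigma_a**2 * np.trace(M3) * np.eye(m))))
```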

With the four formulae (57) in hand, we can now compute the three expectations \(s_{2ij}^1(\varvec{\Sigma }_a)\), \(s_{2ij}^2(\varvec{\Sigma }_a)\) and \(s_{2ij}^3(\varvec{\Sigma }_a)\) of (55). Inserting (54a) into (56a), we have

$$\begin{aligned} s_{2ij}^1(\varvec{\Sigma }_a)= & {} \text {tr}\{ E[ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {G} \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \overline{\mathbf {P}}\mathbf {U}_{jy}] \} \nonumber \\&+ \, \text {tr}\{ E[ \varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G}\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy}] \} \nonumber \\&- \, \text {tr}\{ E[ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {H}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy}] \} \nonumber \\&+ \, \text {tr}\{ E[ \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} ] \}\nonumber \\= & {} \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {Q}_1 \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \overline{\mathbf {P}}\mathbf {U}_{jy} \}\nonumber \\&+ \text {tr}\{ \varvec{\Sigma }_{0y}^{-1} \mathbf {Q}_2 \varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \overline{\mathbf {P}}\mathbf {U}_{jy} \} \nonumber \\&- \, \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {Q}_H \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} \}\nonumber \\&+ \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\mathbf {Q}_3\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\overline{\mathbf {P}} \mathbf {U}_{jy} \}, \end{aligned}$$
(58a)

where

$$\begin{aligned} \mathbf {Q}_1= & {} E\{ \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {G} \} \nonumber \\= & {} E\{ \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}\mathbf {E}_A \} \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \nonumber \\= & {} \mathbf {K}_1 \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T, \end{aligned}$$
(58b)
$$\begin{aligned} \mathbf {K}_1 = [ \text {tr}\{ \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {C}_{ji}^{cr} \} ], \end{aligned}$$
(58c)
$$\begin{aligned} \mathbf {Q}_2= & {} E\{ \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {G} \} \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} E\{ \mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {E}_A \} \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} \mathbf {K}_2 \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T, \end{aligned}$$
(58d)
$$\begin{aligned} \mathbf {K}_2 = [ \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {C}_{ji}^{cc} \} ], \end{aligned}$$
(58e)
$$\begin{aligned} \mathbf {Q}_H = E\{ \mathbf {E}_A\overline{\mathbf {N}}^{-1} \mathbf {E}_A^T \} = [ \text {tr}\{ \overline{\mathbf {N}}^{-1} \mathbf {C}_{ji}^{rr} \} ], \end{aligned}$$
(58f)
$$\begin{aligned} \mathbf {Q}_3= & {} E\{ \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {G}^T \} \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} E\{ \mathbf {E}_A^T \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} \mathbf {E}_A^T \} \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} \mathbf {K}_3, \end{aligned}$$
(58g)
$$\begin{aligned} \mathbf {K}_3 = [ \text {tr}\{\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}} \overline{\mathbf {N}}^{-1}\mathbf {C}_{ji}^{rc} \} ]. \end{aligned}$$
(58h)

In a similar manner to the derivation of (58), we readily find \(s_{2ij}^2(\varvec{\Sigma }_a)\), which is given as follows:

$$\begin{aligned} s_{2ij}^2(\varvec{\Sigma }_a)= & {} \text {tr}\{ E[\mathbf {S}_{2ij}^2(\mathbf {E}_A^2)] \} \nonumber \\= & {} \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {Q}_{4i} \varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{jy} \} + \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {Q}_{5i}\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {U}_{jy} \} \nonumber \\&+ \, \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\mathbf {Q}_{6i}\varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{jy}\} + \text {tr}\{ \varvec{\Sigma }_{0y}^{-1} \mathbf {Q}_{7i}\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {U}_{jy} \},\nonumber \\ \end{aligned}$$
(59a)

where

$$\begin{aligned} \mathbf {Q}_{4i}= & {} E\{ \mathbf {G}\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {G} \} \nonumber \\= & {} \mathbf {K}_{4i} \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T, \end{aligned}$$
(59b)
$$\begin{aligned} \mathbf {K}_{4i} = [ \text {tr}\{\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {C}_{kl}^{cr} \} ], \end{aligned}$$
(59c)
$$\begin{aligned} \mathbf {Q}_{5i}= & {} E\{ \mathbf {G}\varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T \} \nonumber \\= & {} \mathbf {K}_{5i}, \end{aligned}$$
(59d)
$$\begin{aligned} \mathbf {K}_{5i} = [ \text {tr}\{ \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} \mathbf {C}_{kl}^{rr} \} ], \end{aligned}$$
(59e)
$$\begin{aligned} \mathbf {Q}_{6i}= & {} E\{ \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}}\mathbf {G} \} \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {K}_{6i} \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T, \end{aligned}$$
(59f)
$$\begin{aligned} \mathbf {K}_{6i} = [ \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {C}_{kl}^{cc} \} ], \end{aligned}$$
(59g)
$$\begin{aligned} \mathbf {Q}_{7i}= & {} E\{ \mathbf {G}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\mathbf {G}^T \} \nonumber \\= & {} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\mathbf {K}_{7i}, \end{aligned}$$
(59h)
$$\begin{aligned} \mathbf {K}_{7i} = [ \text {tr}\{ \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}}\overline{\mathbf {N}}^{-1} \mathbf {C}_{kl}^{rc} \} ]. \end{aligned}$$
(59i)

In the case of \(s_{2ij}^3(\varvec{\Sigma }_a)\), we have

$$\begin{aligned} s_{2ij}^3(\varvec{\Sigma }_a)= & {} \text {tr}\{ \overline{\mathbf {P}}\mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {Q}_1\varvec{\Sigma }_{0y}^{-1}\mathbf {U}_{jy} \}\nonumber \\&+\, \text {tr}\{ \overline{\mathbf {P}} \mathbf {U}_{iy}\varvec{\Sigma }_{0y}^{-1} \mathbf {Q}_2\varvec{\Sigma }_{0y}^{-1} \mathbf {U}_{jy} \} \nonumber \\&- \, \text {tr}\{\overline{\mathbf {P}} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}}\mathbf {Q}_H \varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{jy} \}\nonumber \\&+\, \text {tr}\{ \overline{\mathbf {P}} \mathbf {U}_{iy} \varvec{\Sigma }_{0y}^{-1} \mathbf {Q}_3\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \mathbf {U}_{jy} \}. \end{aligned}$$
(60)

1.3 Appendix C: the second-order approximation of \(\hat{\varvec{\beta }}\)

Applying a Taylor expansion to the vector \(\hat{\varvec{\beta }}\) of (9) up to the second-order approximation, we have

$$\begin{aligned} \hat{\varvec{\beta }} = \varvec{\beta } + d\hat{\varvec{\beta }} + \frac{1}{2}d^2\hat{\varvec{\beta }}. \end{aligned}$$
(61)

The linear term \(d\hat{\varvec{\beta }}\), derived by applying the matrix differential to (9), is given as follows:

$$\begin{aligned} d\hat{\varvec{\beta }}= & {} d\mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {y} + \mathbf {N}^{-1}d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {y} + \mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {y} \nonumber \\= & {} - \mathbf {N}^{-1}d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {A}\mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {y}\nonumber \\&- \mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1} d\mathbf {A}\mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {y} \nonumber \\&+ \, \mathbf {N}^{-1}d\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}\mathbf {y} + \mathbf {N}^{-1}\mathbf {A}^T\varvec{\Sigma }_{0y}^{-1}d\mathbf {y}. \end{aligned}$$
(62)

By definition, the second-order term \(d^2\hat{\varvec{\beta }}\) can be obtained by applying the matrix differential again to \(d\hat{\varvec{\beta }}\) of (61), namely

$$\begin{aligned} d^2\hat{\varvec{\beta }}= & {} d(d\hat{\varvec{\beta }}) \\= & {} -d{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}d{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&- {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&- \, d{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}}\\&- {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}d{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, d{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} + {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \\&+ \, d{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}} + {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \\&- {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1} {\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}}\\&- {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \end{aligned}$$
$$\begin{aligned}= & {} {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}d{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \\&+ \, 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \\&- \, {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}} \\&- {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}} \\&- {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1} {\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}}\\&- {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \end{aligned}$$
$$\begin{aligned}= & {} 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \nonumber \\&+ \, 2 {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \nonumber \\&+ \, 2{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \nonumber \\&+ \, 2{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \nonumber \\&- \, 2{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1}{\mathbf {y}}\nonumber \\&- 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \nonumber \\&- \, 2 {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} + 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} \nonumber \\&- \, 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}}\nonumber \\&- 2 {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}} \nonumber \\= & {} - 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T {\mathbf {R}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \nonumber \\&- \, 2 {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {A}}{\mathbf {N}}^{-1}d{\mathbf {A}}^T {\mathbf {R}}^T\varvec{\Sigma }_{0y}^{-1}{\mathbf {y}} \nonumber \\&- \, 2{\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {R}} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \nonumber \\&+ \, 2{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}}{\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {y}} \nonumber \\&+ \, 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}d{\mathbf {y}} - 2 {\mathbf {N}}^{-1}d{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} {\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}} \nonumber \\&- \, 2 {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {A}} {\mathbf {N}}^{-1}{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} d{\mathbf {y}}. \end{aligned}$$
(63)

To finally represent the weighted LS estimate \(\hat{\varvec{\beta }}\) in terms of \(\mathbf {E}_A\) and \(\varvec{\epsilon }\) up to the second-order approximation of the Taylor expansion at the point \((\overline{\mathbf {A}}, \varvec{\beta })\), we only need to set \(d\mathbf {A}=\mathbf {E}_A\), \(d\mathbf {y}=\varvec{\epsilon }\) and insert the corresponding true values of the quantities into the Taylor expansion. As a result, we can rewrite \(\hat{\varvec{\beta }}\) of (61) as follows:

$$\begin{aligned} \hat{\varvec{\beta }} = \varvec{\beta } + \hat{\varvec{\beta }}_1(\mathbf {E}_A, \varvec{\epsilon }) + \hat{\varvec{\beta }}_2(\mathbf {E}_A, \varvec{\epsilon }), \end{aligned}$$
(64a)

where

$$\begin{aligned} \hat{\varvec{\beta }}_1(\mathbf {E}_A, \varvec{\epsilon })= & {} d\hat{\varvec{\beta }}(\mathbf {E}_A, \varvec{\epsilon }) \nonumber \\= & {} - \overline{\mathbf {N}}^{-1}\mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {A}}\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {A}}\varvec{\beta }\nonumber \\&-\,\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {A}}\varvec{\beta } \nonumber \\&+ \, \overline{\mathbf {N}}^{-1}\mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}}\varvec{\beta } + \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1}\varvec{\epsilon } \nonumber \\= & {} \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} ( \varvec{\epsilon } - \mathbf {E}_A\varvec{\beta }), \end{aligned}$$
(64b)

and, with \(\overline{\mathbf {R}}^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {A}}=\mathbf {0}\) in mind, the second-order term \(\hat{\varvec{\beta }}_2(\mathbf {E}_A, \varvec{\epsilon })\) becomes:

$$\begin{aligned} \hat{\varvec{\beta }}_2(\mathbf {E}_A, \varvec{\epsilon })= & {} d^2\hat{\varvec{\beta }}(\mathbf {E}_A, \varvec{\epsilon }) / 2 \nonumber \\= & {} - \, \overline{\mathbf {N}}^{-1}\mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1} \overline{\mathbf {R}} \mathbf {E}_A\varvec{\beta } \nonumber \\&+ \, \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A\overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T \varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A \varvec{\beta } \nonumber \\&+ \, \overline{\mathbf {N}}^{-1}\mathbf {E}_A^T\varvec{\Sigma }_{0y}^{-1}\overline{\mathbf {R}} \varvec{\epsilon } \nonumber \\&- \, \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \mathbf {E}_A \overline{\mathbf {N}}^{-1}\overline{\mathbf {A}}^T\varvec{\Sigma }_{0y}^{-1} \varvec{\epsilon }. \end{aligned}$$
(64c)
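
The expansion (64a)–(64c) can be checked in the same numerical fashion. The sketch below is illustrative only (random stand-ins, identity weight matrix): scaling \(\mathbf {E}_A\) and \(\varvec{\epsilon }\) by \(t\), the weighted LS estimate differs from \(\varvec{\beta }+\hat{\varvec{\beta }}_1+\hat{\varvec{\beta }}_2\) by a remainder of order \(t^3\).

```python
# Numerical check of the second-order expansion (64a)-(64c) of the weighted
# LS estimate.  Random stand-ins; Sigma_{0y}^{-1} is taken as the identity.
import numpy as np

rng = np.random.default_rng(4)
n, m = 15, 3
A_bar = rng.normal(size=(n, m))
beta = rng.normal(size=m)
E_A = rng.normal(size=(n, m))
eps = rng.normal(size=n)
W = np.eye(n)                                    # Sigma_{0y}^{-1}

N_inv = np.linalg.inv(A_bar.T @ W @ A_bar)
R_bar = np.eye(n) - A_bar @ N_inv @ A_bar.T @ W

for t in (1e-1, 1e-2, 1e-3):
    E, e = t * E_A, t * eps
    A, y = A_bar + E, A_bar @ beta + e
    beta_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # weighted LS of (9)
    b1 = N_inv @ A_bar.T @ W @ (e - E @ beta)              # eq. (64b)
    b2 = (-N_inv @ E.T @ W @ R_bar @ E @ beta              # eq. (64c)
          + N_inv @ A_bar.T @ W @ E @ N_inv @ A_bar.T @ W @ E @ beta
          + N_inv @ E.T @ W @ R_bar @ e
          - N_inv @ A_bar.T @ W @ E @ N_inv @ A_bar.T @ W @ e)
    print(t, np.linalg.norm(beta_hat - (beta + b1 + b2)))  # ~ t**3
```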

Cite this article

Xu, P. The effect of errors-in-variables on variance component estimation. J Geod 90, 681–701 (2016). https://doi.org/10.1007/s00190-016-0902-0
