
The hypothesis testing statistics in linear ill-posed models

  • Original Article, published in Journal of Geodesy

Abstract

In the geodetic community, an adjustment framework is established by four components: model choice, parameter estimation, variance component estimation (VCE) and quality control. For linear ill-posed models, parameter estimation and VCE have been extensively investigated. However, at least to the best of our knowledge, the quality control of hypothesis testing in ill-posed models has not yet been studied, although it is indispensable. In this paper, we extend the theory of hypothesis testing to ill-posed models. Since Tikhonov regularization is typically applied to stabilize the solution of an ill-posed model, the solution and its associated residuals are biased. We first derive the overall-test statistic, the w-test statistic and the minimal detectable bias for an ill-posed model. Owing to this bias, neither the overall-test nor the w-test statistic follows the distribution used in well-posed models. We therefore develop bias-corrected statistics: the bias-corrected w-test statistic can be well approximated by a standard normal distribution, while the bias-corrected overall-test statistic can be approximated by two non-central chi-square distributions. Finally, numerical experiments with a Fredholm integral equation of the first kind demonstrate the performance of the proposed hypothesis testing statistics in an ill-posed model.
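The setting can be sketched numerically. The snippet below (a minimal Python/NumPy sketch with illustrative matrices, not the paper's Fredholm experiment) forms the Tikhonov-regularized estimate \({\widehat{\mathbf{x}}}_{\kappa }={({\mathbf{A}}^{\mathrm{T}}\mathbf{P}\mathbf{A}+\kappa \mathbf{S})}^{-1}{\mathbf{A}}^{\mathrm{T}}\mathbf{P}\mathbf{y}\) and checks its bias \(-\kappa {\mathbf{N}}_{\kappa }^{-1}\mathbf{S}\mathbf{x}\), the quantity that propagates into the residuals and the test statistics studied in this paper:

```python
import numpy as np

# Minimal sketch of the Tikhonov setting (illustrative matrices, not the
# paper's experiment): y = A x + eps with weight matrix P, regularization
# matrix S and regularization parameter kappa.
m, n = 12, 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])  # ill-conditioned
P = np.eye(m)
S = np.eye(n)
kappa = 1e-3
x = np.ones(n)  # "true" parameters

N = A.T @ P @ A
N_kappa = N + kappa * S
# Expectation of the regularized estimate (E(eps) = 0), and its bias
x_hat_mean = np.linalg.solve(N_kappa, A.T @ P @ (A @ x))
bias = x_hat_mean - x
bias_formula = -kappa * np.linalg.solve(N_kappa, S @ x)  # -kappa N_kappa^{-1} S x
print(np.allclose(bias, bias_formula))
```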



Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  • Amiri-Simkooei A (2007) Least squares variance component estimation: theory and GPS applications. PhD dissertation, Delft Univ Technol, Delft, The Netherlands

  • Amiri-Simkooei A, Teunissen PJG, Tiberius C (2009) Application of least-squares variance component estimation to GPS observables. J Surv Eng 135(4):149–160

  • Arnold S (1981) The theory of linear models and multivariate analysis. Wiley, New York

  • Baarda W (1967) Statistical concepts in geodesy. Netherlands Geodetic Commission, Publ on geodesy, New series, vol 2, no 4, Delft, The Netherlands

  • Baarda W (1968) A testing procedure for use in geodetic networks. Netherlands Geodetic Commission, Publ on geodesy, New series, vol 2, no 5, Delft, The Netherlands

  • Calvetti D, Morigi S, Reichel L, Sgallari F (2000) Tikhonov regularization and the L-curve for large discrete ill-posed problems. J Comput Appl Math 123(1–2):423–446

  • Golub G, Heath M, Wahba G (1979) Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21(2):215–223

  • Grafarend E, Schaffrin B (1993) Adjustment computations in linear models (in German). Bibliographical Institute, Mannheim

  • Hansen P (1990) Truncated singular value decomposition solutions to discrete ill-posed problems with ill-determined numerical rank. SIAM J Sci Comput 11(3):503–518

  • Hansen P, O'Leary D (1993) The use of the L-curve in the regularization of discrete ill-posed problems. SIAM J Sci Comput 14(6):1487–1503

  • Helmert F (1907) Die Ausgleichungsrechnung nach der Methode der kleinsten Quadrate, 2nd edn. Teubner, Leipzig

  • Hoerl A, Kennard R (1970a) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1):55–67

  • Hoerl A, Kennard R (1970b) Ridge regression: applications to nonorthogonal problems. Technometrics 12(1):69–82

  • Kent J, Mohammadzadeh M (2000) Global optimization of the generalized cross-validation criterion. Stat Comput 10(3):231–236

  • Koch K (1978) Schätzung von Varianzkomponenten. Allg Vermess Nachr 85:264–269

  • Koch K (1986) Maximum likelihood estimate of variance components. J Geod 60(4):329–338

  • Koch K (1999) Parameter estimation and hypothesis testing in linear models, 2nd edn. Springer, Berlin

  • Koch K (2007) Introduction to Bayesian statistics, 2nd edn. Springer, Berlin

  • Koch K, Kusche J (2002) Regularization of geopotential determination from satellite data by variance components. J Geod 76(5):259–268

  • Kubik K (1970) The estimation of the weights of measured quantities within the method of least squares. Bull Geod 95(1):21–40

  • Kusche J (2003) Noise variance estimation and optimal weight determination for GOCE gravity recovery. Adv Geosci 1:81–85

  • Leick A (1980) Adjustment computations. Lecture notes in surveying engineering, University of Maine, Orono

  • Li B (2016) Stochastic modeling of triple-frequency BeiDou signals: estimation, assessment and impact analysis. J Geod 90(7):593–610

  • Li B, Shen Y, Xu P (2008) Assessment of stochastic models for GPS measurements with different types of receivers. Chin Sci Bull 53(20):3219–3225

  • Li B, Shen Y, Lou L (2011) Efficient estimation of variance and covariance components: a case study for GPS stochastic model evaluation. IEEE Trans Geosci Remote Sens 49(1):203–210

  • Li B, Lou L, Shen Y (2016) GNSS elevation-dependent stochastic modeling and its impacts on the statistic testing. J Surv Eng 142(2):04015012

  • Li B, Zhang L, Verhagen S (2017) Impacts of BeiDou stochastic model on reliability: overall test, w-test and minimal detectable bias. GPS Solut 21(3):1095–1112

  • Mathai A, Provost S (1992) Quadratic forms in random variables: theory and applications. Marcel Dekker, New York

  • Pope A (1975) The statistics of residuals and the detection of outliers. Presented at the XVI General Assembly of IUGG, Grenoble, France; reprinted as NOAA Tech Report NOS 65 NGS 1, 1976

  • Pukelsheim F (1976) Estimating variance components in linear models. J Multivar Anal 6(4):626–629

  • Rao C (1971a) Estimation of variance and covariance components–MINQUE theory. J Multivar Anal 1(3):257–275

  • Rao C (1971b) Minimum variance quadratic unbiased estimation of variance components. J Multivar Anal 1(4):445–456

  • Rao C, Toutenburg H (1995) Linear models: least-squares and alternatives. Springer, Heidelberg

  • Schaffrin B (1983) Estimation of variance-covariance components for heterogeneous replicated measurements (in German). German Geodetic Community, Publ C-282, Munich

  • Schaffrin B (2008a) On penalized least squares: its mean squared error and a quasi-optimal weight ratio. In: Recent advances in linear models and related areas. Physica-Verlag, Heidelberg, pp 313–322

  • Schaffrin B (2008b) Minimum mean squared error (MSE) adjustment and the optimal Tikhonov–Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE). J Geod 82(2):113–121

  • Schwartz J (2012) Introduction to matrices and vectors. Dover books on mathematics (republished). Dover Publications, Mineola

  • Shen Y, Xu P, Li B (2012) Bias-corrected regularized solution to inverse ill-posed models. J Geod 86:597–608

  • Strang G, Borre K (1997) Linear algebra, geodesy, and GPS. Wellesley-Cambridge Press, Wellesley

  • Teunissen PJG (1990) An integrity and quality control procedure for use in multi sensor integration. In: Proc ION GPS 1990 (republished in ION Red Book Series, vol 7, 2010)

  • Teunissen PJG (1998a) Minimal detectable biases of GPS data. J Geod 72(4):236–244

  • Teunissen PJG (1998b) Quality control and GPS. In: GPS for geodesy, 2nd edn, chap 7, pp 187–229

  • Teunissen PJG (2003) Adjustment theory: an introduction, 2nd edn. Delft University Press, Delft

  • Teunissen PJG (2006) Testing theory: an introduction, 2nd edn. Delft University Press, Delft

  • Teunissen PJG (2018) Distributional theory for the DIA method. J Geod 92:59–80

  • Teunissen PJG, Amiri-Simkooei A (2008) Least-squares variance component estimation. J Geod 82(2):65–82

  • Teunissen PJG, Salzmann M (1989) A recursive slippage test for use in state-space filtering. Manuscr Geod 14(6):383–390

  • Teunissen PJG, Simons D, Tiberius C (2005) Probability and observation theory. Lecture notes, Delft University of Technology, Delft

  • Tiberius C, Kenselaar F (2003) Variance component estimation and precise GPS positioning: case study. J Surv Eng 129(1):11–18

  • Tienstra J (1956) Theory of the adjustment of normally distributed observations. Argus, Amsterdam

  • Tikhonov A (1963) Solution of incorrectly formulated problems and the regularization method. Soviet Math Dokl 4:1035–1038

  • Tikhonov A, Arsenin V (1977) Solutions of ill-posed problems. Wiley, New York

  • Tikhonov A, Goncharsky V, Stepanov V, Yagola A (1995) Numerical methods for the solution of ill-posed problems. Kluwer Academic Publishers, Dordrecht

  • Wang J, Stewart M, Tsakiri M (1998) Stochastic modeling for static GPS baseline data processing. J Surv Eng 124(4):171–181

  • Wang J, Satirapod C, Rizos C (2002) Stochastic assessment of GPS carrier phase measurements for precise static relative positioning. J Geod 76(2):95–104

  • Wu Z, Bian S, Xiang C, Tong Y (2013) A new method for TSVD regularization truncated parameter selection. Math Probl Eng 2013:161834. https://doi.org/10.1155/2013/161834

  • Xu P (1992) Determination of surface gravity anomalies using gradiometric observables. Geophys J Int 110(2):321–332

  • Xu P (1998) Truncated SVD methods for linear discrete ill-posed problems. Geophys J Int 135(2):505–514

  • Xu P (2009) Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems. Geophys J Int 179(1):182–200

  • Xu P, Rummel R (1994a) A simulation study of smoothness methods in recovery of regional gravity fields. Geophys J Int 117(2):472–486

  • Xu P, Rummel R (1994b) Generalized ridge regression with applications in determination of potential fields. Manuscr Geod 20(1):8–20

  • Xu P, Shen Y, Fukuda Y, Liu Y (2006) Variance component estimation in linear inverse ill-posed models. J Geod 80(2):69–81

  • Xu P, Liu Y, Shen Y, Fukuda Y (2007) Estimability analysis of variance and covariance components. J Geod 81:593–602

  • Yang Y, Xu T, Song L (2005) Robust estimation of variance components with application in global positioning system network adjustment. J Surv Eng 131(4):107–112

  • Yang Y, Zeng A, Zhang J (2009) Adaptive collocation with application in height system transformation. J Geod 83(5):403–410

  • Yu Z (1996) A universal formula of maximum likelihood estimation of variance-covariance components. J Geod 70(4):233–240


Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 41874030, 42074026), the Program of Shanghai Academic Research Leader (No. 20XD1423800), the National Key Research and Development Program of China (No. 2016YFB0501802), and the Fundamental Research Funds for the Central Universities.

Author information


Contributions

BL proposed this study and developed the theory. MW conducted the numerical computation and analysis. BL and MW wrote the manuscript and YS reviewed and commented on the manuscript. All authors were involved in discussions throughout the development.

Corresponding author

Correspondence to Bofeng Li.

Appendices

Appendix A: The derivation of (23)

We start with

$$\mathrm{E}\left({\mathbf{v}}_{\kappa }^{\mathrm{T}}\mathbf{P}{\mathbf{v}}_{\kappa }\right)=\mathrm{Tr}\left(\mathrm{E}\left(\mathbf{P}{\mathbf{v}}_{\kappa }{\mathbf{v}}_{\kappa }^{\mathrm{T}}\right)\right)$$
(A1)

Inserting \({\mathbf{v}}_{\kappa }\) of (7a) into (A1) yields

$$\begin{array}{c}\mathrm{E}\left({\mathbf{v}}_{\kappa }^{\mathrm{T}}\mathbf{P}{\mathbf{v}}_{\kappa }\right)={\kappa }^{2}\mathrm{Tr}({\mathbf{N}}_{\kappa }^{-1}\mathbf{S}\mathbf{x}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N})+\mathrm{Tr}(\mathbf{P}{\mathbf{R}}_{\kappa }\mathrm{E}\left({\varvec{\upvarepsilon}}{{\varvec{\upvarepsilon}}}^{\mathrm{T}}\right){\mathbf{R}}_{\kappa }^{\mathrm{T}})\\ ={\kappa }^{2}\mathrm{Tr}({\mathbf{Q}}_{\kappa s}\mathbf{x}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N})+{\sigma }_{0}^{2}\mathrm{Tr}({\mathbf{R}}_{\kappa }^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}})\end{array}$$
(A2)

Inserting \({\mathbf{R}}_{\kappa }={\mathbf{I}}_{m}-\mathbf{A}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}\mathbf{P}\) into (A2) yields

$$\mathrm{Tr}({\mathbf{R}}_{\kappa }^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}})=\mathrm{Tr}({\mathbf{I}}_{m}-2{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}+{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N})$$
(A3)

Further inserting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) into (A3) yields

$$\mathrm{Tr}({\mathbf{R}}_{\kappa }^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}})=q+{\kappa }^{2}\mathrm{Tr}({\mathbf{Q}}_{\kappa s}^{2})$$
(A4)

Substituting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) and (A4) into (A2) yields

$$\mathrm{E}\left(\frac{{\mathbf{v}}_{\kappa }^{\mathrm{T}}\mathbf{P}{\mathbf{v}}_{\kappa }}{{\sigma }_{0}^{2}}\right)=q+ {\kappa }^{2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{2}\right\}+\frac{{\kappa }^{2}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}\mathbf{x}}{{\sigma }_{0}^{2}}-\frac{{\kappa }^{3}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{2}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(A5)
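The trace identities above can be spot-checked numerically. The following sketch (illustrative matrices, \(\mathbf{P}={\mathbf{I}}_{m}\); not the authors' code) verifies (A4) and checks that the expectation assembled from (A2) equals the right-hand side of (A5):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
P, S = np.eye(m), np.eye(n)
kappa, sigma2 = 1e-2, 0.25          # kappa and sigma_0^2
x = rng.standard_normal(n)

N = A.T @ P @ A
N_inv = np.linalg.inv(N + kappa * S)        # N_kappa^{-1}
Q = N_inv @ S                               # Q_{kappa s}
R = np.eye(m) - A @ N_inv @ A.T @ P         # R_kappa
q = m - n                                   # redundancy

# (A4): Tr(R_kappa^2) = q + kappa^2 Tr(Q^2)
lhs_A4 = np.trace(R @ R)
rhs_A4 = q + kappa**2 * np.trace(Q @ Q)

# (A5): E(v^T P v / sigma_0^2); left side assembled from (A2)
lhs_A5 = kappa**2 * x @ S.T @ N_inv @ N @ N_inv @ S @ x / sigma2 + np.trace(R @ R)
rhs_A5 = (q + kappa**2 * np.trace(Q @ Q)
          + kappa**2 * x @ S.T @ Q @ x / sigma2
          - kappa**3 * x @ S.T @ Q @ Q @ x / sigma2)
print(np.isclose(lhs_A4, rhs_A4) and np.isclose(lhs_A5, rhs_A5))
```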

Appendix B: The derivation of (34)

The non-centrality parameter follows

$$\begin{array}{c}{\lambda }_{1}^{2}=\frac{{\mathbf{x}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\mathbf{Q}}_{\mathbf{y}\mathbf{y}}^{-1/2}{{\varvec{\Pi}}}_{1}{\mathbf{Q}}_{\mathbf{y}\mathbf{y}}^{-1/2}\mathbf{A}\mathbf{x}}{{\sigma }_{0}^{2}}=\frac{{\mathbf{x}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}}\mathbf{P}{\mathbf{R}}_{\kappa }{\mathbf{R}}_{\kappa }\mathbf{A}\mathbf{x}}{{\sigma }_{0}^{2}}\\ =\frac{{\kappa }^{2}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}\mathbf{P}{\mathbf{R}}_{\kappa }{\mathbf{R}}_{\kappa }\mathbf{A}{\mathbf{N}}_{\kappa }^{-1}\mathbf{S}\mathbf{x}}{{\sigma }_{0}^{2}}\end{array}$$
(B1)

Substituting \({\mathbf{R}}_{\kappa }\mathbf{A}=\kappa \mathbf{A}{\mathbf{N}}_{\kappa }^{-1}\mathbf{S}\) and \({\mathbf{R}}_{\kappa }={\mathbf{I}}_{m}-\mathbf{A}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}\mathbf{P}\) into (B1) yields

$${\lambda }_{1}^{2}=\frac{{\kappa }^{2}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{N}}_{\kappa }^{-1}(\mathbf{N}-2\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}+\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}){\mathbf{N}}_{\kappa }^{-1}\mathbf{S}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(B2)

Inserting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) into (B2) yields

$${\lambda }_{1}^{2}=\frac{{\kappa }^{4}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{3}\mathbf{x}}{{\sigma }_{0}^{2}}-\frac{{\kappa }^{5}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{4}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(B3)
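Result (B3) can be verified numerically. The sketch below (illustrative matrices; \(\mathbf{S}={\mathbf{I}}_{n}\), the case also assumed in Appendix F) evaluates the first line of (B1) directly and compares it with (B3):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
P = np.eye(m)
S = np.eye(n)                 # S = I_n, as assumed in Appendix F
kappa, sigma2 = 1e-2, 0.25
x = rng.standard_normal(n)

N = A.T @ P @ A
N_inv = np.linalg.inv(N + kappa * S)
Q = N_inv @ S
R = np.eye(m) - A @ N_inv @ A.T @ P

u = R @ R @ A @ x                       # R_kappa^2 A x, the biased part in (B1)
lam1_direct = u @ P @ u / sigma2
lam1_B3 = (kappa**4 * x @ S @ np.linalg.matrix_power(Q, 3) @ x
           - kappa**5 * x @ S @ np.linalg.matrix_power(Q, 4) @ x) / sigma2
print(np.isclose(lam1_direct, lam1_B3))
```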

Appendix C: The derivation of (35)

The expectation of the quadratic form of the bias-corrected residuals \({\mathbf{v}}_{\kappa ,1}\) with respect to the weight matrix \(\mathbf{P}\) is \(\Omega =\mathrm{E}\left({\mathbf{v}}_{\kappa ,1}^{\mathrm{T}}\mathbf{P}{\mathbf{v}}_{\kappa ,1}\right)=\mathrm{Tr}\left\{\mathbf{P}\mathrm{E}\left({\mathbf{v}}_{\kappa ,1}{\mathbf{v}}_{\kappa ,1}^{\mathrm{T}}\right)\right\}\). Inserting (32a) yields

$$\begin{array}{c}\Omega ={\kappa }^{2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}\mathrm{E}\left(\delta {\widehat{\mathbf{x}}}_{\kappa }\delta {\widehat{\mathbf{x}}}_{\kappa }^{\mathrm{T}}\right){\mathbf{Q}}_{\kappa s}^{\mathrm{T}}\mathbf{N}\right\}+{\sigma }_{0}^{2}\mathrm{Tr}\left\{{\mathbf{R}}_{\kappa }^{\mathrm{T}}{\mathbf{R}}_{\kappa }^{\mathrm{T}}\right\}\\ +2\kappa \mathrm{Tr}\left\{\mathbf{P}\mathbf{A}{\mathbf{Q}}_{\kappa s}\mathrm{E}\left(\delta {\widehat{\mathbf{x}}}_{\kappa }{{\varvec{\upvarepsilon}}}^{\mathrm{T}}\right){\mathbf{R}}_{\kappa }^{\mathrm{T}}\right\}\end{array}$$
(C1)

with \(\delta {\widehat{\mathbf{x}}}_{\kappa }=\mathbf{x}-{\widehat{\mathbf{x}}}_{\kappa }\). From a standard result of linear algebra (Strang and Borre 1997),

$$\mathrm{E}(\delta {\widehat{\mathbf{x}}}_{\kappa }\delta {\widehat{\mathbf{x}}}_{\kappa }^{\mathrm{T}})=\mathrm{D}(\delta {\widehat{\mathbf{x}}}_{\kappa })+\mathrm{E}(\delta {\widehat{\mathbf{x}}}_{\kappa }){\mathrm{E}(\delta {\widehat{\mathbf{x}}}_{\kappa })}^{\mathrm{T}}$$
(C2)

Inserting \(\mathrm{E}\left(\delta {\widehat{\mathbf{x}}}_{\kappa }\right)=\kappa {\mathbf{Q}}_{\kappa s}\mathbf{x}\) and \(\mathrm{D}\left(\delta {\widehat{\mathbf{x}}}_{\kappa }\right)={\sigma }_{0}^{2}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\) yields

$$\mathrm{E}(\delta {\widehat{\mathbf{x}}}_{\kappa }\delta {\widehat{\mathbf{x}}}_{\kappa }^{\mathrm{T}})={\sigma }_{0}^{2}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}+{\kappa }^{2}{\mathbf{Q}}_{\kappa s}\mathbf{x}{\mathbf{x}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{\mathrm{T}}$$
(C3)

It is straightforward to show that \(\mathrm{E}\left(\delta {\widehat{\mathbf{x}}}_{\kappa }{{\varvec{\upvarepsilon}}}^{\mathrm{T}}\right)=-{\sigma }_{0}^{2}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}\). Then

$$\begin{array}{c}\begin{array}{c}\Omega ={\sigma }_{0}^{2}{\kappa }^{2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{Q}}_{\kappa s}^{\mathrm{T}}\mathbf{N}\right\}-2{\sigma }_{0}^{2}\mathrm{Tr}\left\{{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}\right\}\\ +m{\sigma }_{0}^{2}-2{\sigma }_{0}^{2}\kappa \mathrm{Tr}\left\{{\mathbf{Q}}_{ks}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}-{\mathbf{Q}}_{ks}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}\right\}\end{array}\\ +{\sigma }_{0}^{2}\mathrm{Tr}\left\{{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}\right\}+{\kappa }^{4}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{2}\mathbf{x}{\mathbf{x}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{2}\mathbf{N}\right\}\end{array}$$
(C4)

Substituting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) into (C4) yields the expectation of \({T}_{\kappa ,1}\) as

$$\mathrm{E}\left({T}_{\kappa ,1}\right)=q+{\kappa }^{4}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{4}\right\}+\frac{{\kappa }^{4}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{3}\mathbf{x}-{\kappa }^{5}{\mathbf{x}}^{\mathrm{T}}{\mathbf{S}}^{\mathrm{T}}{\mathbf{Q}}_{\kappa s}^{4}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(C5)
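The simplification from (C4) to (C5) can be checked numerically. The sketch below (illustrative matrices, \(\mathbf{S}={\mathbf{I}}_{n}\) for simplicity) assembles \(\Omega\) term by term from (C4) and compares \(\Omega /{\sigma }_{0}^{2}\) with the right-hand side of (C5):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
P, S = np.eye(m), np.eye(n)   # S = I_n for simplicity
kappa, sigma2 = 1e-2, 0.25
x = rng.standard_normal(n)

N = A.T @ P @ A
N_inv = np.linalg.inv(N + kappa * S)
Q = N_inv @ S
B = N_inv @ N                 # N_kappa^{-1} N = I_n - kappa Q
q = m - n

# Omega assembled term by term from (C4)
Omega = (sigma2 * kappa**2 * np.trace(Q @ B @ N_inv @ Q.T @ N)
         - 2 * sigma2 * np.trace(B)
         + m * sigma2
         - 2 * sigma2 * kappa * np.trace(Q @ B - Q @ B @ B)
         + sigma2 * np.trace(B @ B)
         + kappa**4 * np.trace(Q @ Q @ np.outer(x, x) @ Q @ Q @ N))

# Right-hand side of (C5)
E_T = (q + kappa**4 * np.trace(np.linalg.matrix_power(Q, 4))
       + (kappa**4 * x @ S @ np.linalg.matrix_power(Q, 3) @ x
          - kappa**5 * x @ S @ np.linalg.matrix_power(Q, 4) @ x) / sigma2)
print(np.isclose(Omega / sigma2, E_T))
```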

Appendix D: The derivation of (39)

The non-centrality parameter \({\lambda }_{t}^{2}\) reads

$${\lambda }_{t}^{2}=\frac{{\mathbf{x}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\mathbf{Q}}_{\mathbf{y}\mathbf{y}}^{-1/2}{{\varvec{\Pi}}}_{t}{\mathbf{Q}}_{\mathbf{y}\mathbf{y}}^{-1/2}\mathbf{A}\mathbf{x}}{{\sigma }_{0}^{2}}=\frac{{\mathbf{x}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{({\mathbf{R}}_{\kappa }^{t+1})}^{\mathrm{T}}\mathbf{P}{\mathbf{R}}_{\kappa }^{t+1}\mathbf{A}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(D1)

In terms of \({\mathbf{R}}_{\kappa }\mathbf{A}=\kappa \mathbf{A}{\mathbf{Q}}_{\kappa s}\), we have the recursion

$${\mathbf{R}}_{\kappa }^{t+1}\mathbf{A}=\kappa {\mathbf{R}}_{\kappa }^{t}\mathbf{A}{\mathbf{Q}}_{\kappa s}={\kappa }^{t+1}\mathbf{A}{\mathbf{Q}}_{\kappa s}^{t+1}$$
(D2a)
$${\mathbf{A}}^{\mathrm{T}}{({\mathbf{R}}_{\kappa }^{t+1})}^{\mathrm{T}}=\kappa \mathbf{S}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}{({\mathbf{R}}_{\kappa }^{t})}^{\mathrm{T}}={\kappa }^{t+1}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{t}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}$$
(D2b)

Substituting (D2a) and (D2b) into (D1) yields

$${\lambda }_{t}^{2}=\frac{{\kappa }^{2t+2}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{t}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{Q}}_{\kappa s}^{t+1}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(D3)

Inserting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) into (D3) gives

$${\lambda }_{t}^{2}=\frac{{\kappa }^{2t+2}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{2t+1}\mathbf{x}}{{\sigma }_{0}^{2}}-\frac{{\kappa }^{2t+3}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{2t+2}\mathbf{x}}{{\sigma }_{0}^{2}}$$
(D4)
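Result (D4) can be checked for several correction numbers t at once. The sketch below (illustrative matrices, \(\mathbf{S}={\mathbf{I}}_{n}\) for simplicity) evaluates the non-centrality parameter directly from (D1) and compares it with (D4) for t = 0, 1, 2:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
P, S = np.eye(m), np.eye(n)   # S = I_n for simplicity
kappa, sigma2 = 1e-2, 0.25
x = rng.standard_normal(n)

N = A.T @ P @ A
N_inv = np.linalg.inv(N + kappa * S)
Q = N_inv @ S
R = np.eye(m) - A @ N_inv @ A.T @ P

ok = []
for t in range(3):            # correction numbers t = 0, 1, 2
    u = np.linalg.matrix_power(R, t + 1) @ A @ x      # R_kappa^{t+1} A x in (D1)
    lam_direct = u @ P @ u / sigma2
    lam_D4 = (kappa**(2*t + 2) * x @ S @ np.linalg.matrix_power(Q, 2*t + 1) @ x
              - kappa**(2*t + 3) * x @ S @ np.linalg.matrix_power(Q, 2*t + 2) @ x) / sigma2
    ok.append(np.isclose(lam_direct, lam_D4))
print(all(ok))
```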

Appendix E: The derivation of (40)

With the bias-corrected residual \({\mathbf{v}}_{\kappa ,\mathrm{t}}={\mathbf{R}}_{\kappa }^{t+1}\mathbf{A}\mathbf{x}+{\mathbf{R}}_{\kappa }^{t+1}{\varvec{\upvarepsilon}}\) of (37a), the expectation of its quadratic form is

$$\mathrm{E}\left({\mathbf{v}}_{\kappa ,\mathrm{t}}^{\mathrm{T}}\mathbf{P}{\mathbf{v}}_{\kappa ,t}\right)={\kappa }^{2t+2}\mathrm{Tr}\left\{\mathbf{x}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{t}{\mathbf{N}}_{\kappa }^{-1}\mathbf{N}{\mathbf{Q}}_{\kappa s}^{t+1}\right\}+{\sigma }_{0}^{2}\mathrm{Tr}\left\{{\mathbf{R}}_{\kappa }^{2t+2}\right\}$$
(E1)

Inserting \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) into (E1) gives the expectation of \({T}_{\kappa ,t}\) as

$$\mathrm{E}\left({T}_{\kappa ,t}\right)=\frac{{\kappa }^{2t+2}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{2t+1}\mathbf{x}}{{\sigma }_{0}^{2}}-\frac{{\kappa }^{2t+3}{\mathbf{x}}^{\mathrm{T}}\mathbf{S}{\mathbf{Q}}_{\kappa s}^{2t+2}\mathbf{x}}{{\sigma }_{0}^{2}}+\mathrm{Tr}\left\{{\mathbf{R}}_{\kappa }^{2t+2}\right\}$$
(E2)

Following the binomial theorem for matrix products (Schwartz 2012, pp. 158–159), the expansion of \({\mathbf{R}}_{\kappa }^{p}\) is

$${\mathbf{R}}_{\kappa }^{p}={\mathbf{I}}_{m}+{\sum }_{r=1}^{p}\left(\begin{array}{c}p\\ r\end{array}\right){\left(-\mathbf{A}{\mathbf{N}}_{\kappa }^{-1}{\mathbf{A}}^{\mathrm{T}}\mathbf{P}\right)}^{r}$$
(E3)

Then, the trace of \({\mathbf{R}}_{\kappa }^{p}\) is

$$\mathrm{Tr}\left\{{\mathbf{R}}_{\kappa }^{p}\right\}=m+{\sum }_{r=1}^{p}\left(\begin{array}{c}p\\ r\end{array}\right){(-1)}^{r}\mathrm{Tr}\left\{{\left({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}\right)}^{r}\right\}=m+\fancyscript{l}$$
(E4)

where \(\left(\begin{array}{c}p\\ r\end{array}\right)={\prod }_{l=1}^{r}\frac{p-l+1}{l}\) is the binomial coefficient. Applying the binomial theorem for matrix products again yields

$$\mathrm{Tr}\left\{{\left({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}\right)}^{j}\right\}=\mathrm{Tr}\left\{{\left({\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\right)}^{j}\right\}=n+{\sum }_{i=1}^{j}\left(\begin{array}{c}j\\ i\end{array}\right){\left(-\kappa \right)}^{i}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{i}\right\}$$
(E5)

and then

$$\fancyscript{l}={\sum }_{r=1}^{p}\left(\begin{array}{c}p\\ r\end{array}\right){(-1)}^{r}\left(n+{\sum }_{i=1}^{r}\left(\begin{array}{c}r\\ i\end{array}\right){\left(-\kappa \right)}^{i}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{i}\right\}\right)$$
(E6)

The sum of the constant terms in (E6) is

$${\fancyscript{l}}_{c}={\sum }_{r=1}^{p}\left(\begin{array}{c}p\\ r\end{array}\right){(-1)}^{r}n=n{\sum }_{r=0}^{p}{\left(-1\right)}^{r}\left(\begin{array}{c}p\\ r\end{array}\right)-n=-n$$
(E7)

The sum of the coefficients of \(\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{i}\right\}\) is (for \(i>0\))

$${\fancyscript{l}}_{{\mathbf{Q}}_{\kappa s}^{i}}={\left(-\kappa \right)}^{i}{\sum }_{r=0}^{p}{(-1)}^{r}\left(\begin{array}{c}p\\ r\end{array}\right)\left(\begin{array}{c}r\\ i\end{array}\right)$$
(E8)

Using the identity \(\left(\begin{array}{c}n\\ m\end{array}\right)\left(\begin{array}{c}m\\ k\end{array}\right)=\left(\begin{array}{c}n\\ k\end{array}\right)\left(\begin{array}{c}n-k\\ m-k\end{array}\right)\) for any \(k\le m\le n\) (Schwartz 2012), (E8) can be rewritten as

$$\begin{array}{c}{\fancyscript{l}}_{{\mathbf{Q}}_{\kappa s}^{i}}={\left(-\kappa \right)}^{i}{\sum }_{r=0}^{p}{(-1)}^{r}\left(\begin{array}{c}p\\ i\end{array}\right)\left(\begin{array}{c}p-i\\ r-i\end{array}\right)\\ ={\left(-\kappa \right)}^{i}\left(\begin{array}{c}p\\ i\end{array}\right){\sum }_{r=0}^{p}{(-1)}^{r}\left(\begin{array}{c}p-i\\ r-i\end{array}\right)\end{array}$$
(E9)

Inserting \(r=j+i\) into (E9) yields

$$\begin{array}{c}{\fancyscript{l}}_{{\mathbf{Q}}_{\kappa s}^{i}}={\left(-\kappa \right)}^{i}\left(\begin{array}{c}p\\ i\end{array}\right){\sum }_{j=-i}^{p-i}{(-1)}^{j+i}\left(\begin{array}{c}p-i\\ j\end{array}\right)\\ ={\left(-\kappa \right)}^{i}{(-1)}^{i}\left(\begin{array}{c}p\\ i\end{array}\right){\sum }_{j=0}^{p-i}{(-1)}^{j}\left(\begin{array}{c}p-i\\ j\end{array}\right)\end{array}$$
(E10)

Then we have

$${\fancyscript{l}}_{{\mathbf{Q}}_{\kappa s}^{i}}=\left\{\begin{array}{c}0, 1\le i<p\\ {\kappa }^{p}, i=p\end{array}\right.$$
(E11)
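The coefficient collapse in (E8)–(E11) is a purely combinatorial fact and can be checked with exact integer arithmetic (a small sketch, not from the paper):

```python
import math

# Check (E8)-(E11): the alternating sum  sum_r (-1)^r C(p,r) C(r,i)
# vanishes for 1 <= i < p and equals (-1)^p for i = p, so that after the
# (-kappa)^i factor only the kappa^p Tr(Q^p) term survives in (E4).
def coeff_sum(p, i):
    return sum((-1)**r * math.comb(p, r) * math.comb(r, i) for r in range(p + 1))

vanish = all(coeff_sum(p, i) == 0 for p in range(2, 9) for i in range(1, p))
survive = all(coeff_sum(p, p) == (-1)**p for p in range(1, 9))
print(vanish and survive)
```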

Finally, the trace of \({\mathbf{R}}_{\kappa }^{2t+2}\) reads

$$\mathrm{Tr}\left\{{\mathbf{R}}_{\kappa }^{2t+2}\right\}=q+{\kappa }^{2t+2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{2t+2}\right\}$$
(E12)
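Result (E12) holds for any symmetric positive-definite regularization matrix, as the derivation only uses the cyclic property of the trace. A numerical sketch (illustrative matrices, diagonal S chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
P = np.eye(m)
S = np.diag(rng.uniform(0.5, 2.0, size=n))   # any positive-definite S works here
kappa = 1e-2

N = A.T @ P @ A
N_inv = np.linalg.inv(N + kappa * S)
Q = N_inv @ S
R = np.eye(m) - A @ N_inv @ A.T @ P
q = m - n

ok = []
for t in range(3):
    lhs = np.trace(np.linalg.matrix_power(R, 2*t + 2))
    rhs = q + kappa**(2*t + 2) * np.trace(np.linalg.matrix_power(Q, 2*t + 2))
    ok.append(np.isclose(lhs, rhs))
print(all(ok))
```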

Appendix F: The derivation of (50a, b)

To prove the convergence of \({\lambda }_{t}^{2}\) and \({b}_{{T}_{\kappa ,t}}\) with respect to t, we assume the regularization matrix \(\mathbf{S}={\mathbf{I}}_{n}\) without loss of generality. Since \({b}_{{T}_{\kappa ,t}}={\lambda }_{t}^{2}+{\kappa }^{2t+2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{2t+2}\right\}\), it suffices to prove the convergence of \({\lambda }_{t}^{2}\) and \({q}_{t}={\kappa }^{2t+2}\mathrm{Tr}\left\{{\mathbf{Q}}_{\kappa s}^{2t+2}\right\}\).

Let the singular value decomposition of \(\kappa {\mathbf{Q}}_{\kappa s}\) be

$$\kappa {\mathbf{Q}}_{\kappa s}=\mathbf{U}{\varvec{\Lambda}}{\mathbf{U}}^{\mathrm{T}}$$
(F1)

where \(\mathbf{U}\) is an orthogonal matrix whose ith column is denoted by \({\mathbf{u}}_{i}\), and \({\varvec{\Lambda}}=\mathrm{diag}([{\lambda }_{1},\dots ,{\lambda }_{n}])\) is the diagonal matrix of the n singular values. Since all singular values of the positive-definite matrix \({\mathbf{N}}_{\kappa }^{-1}\mathbf{N}={\mathbf{I}}_{n}-\kappa {\mathbf{Q}}_{\kappa s}\) are positive and smaller than 1 for \(\kappa >0\), it follows that \(0<{\lambda }_{i}<1\). With (F1), we rewrite \({q}_{t}\) and \({\lambda }_{t}^{2}\) as

$${q}_{t}=\mathrm{Tr}\left\{{\sum }_{i=1}^{n}{\lambda }_{i}^{2t+2}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{\mathrm{T}}\right\}={\sum }_{i=1}^{n}{\lambda }_{i}^{2t+2}$$
(F2)
$$\begin{array}{c}{\lambda }_{t}^{2}=\frac{\kappa }{{\sigma }_{0}^{2}}{\mathbf{x}}^{\mathrm{T}}\left({\sum }_{i=1}^{n}{\lambda }_{i}^{2t+2}(\frac{1}{{\lambda }_{i}}-1){\mathbf{u}}_{i}{\mathbf{u}}_{i}^{\mathrm{T}}\right)\mathbf{x}\\ =\frac{\kappa }{{\sigma }_{0}^{2}}{\sum }_{i=1}^{n}{\lambda }_{i}^{2t+2}(\frac{1}{{\lambda }_{i}}-1){{\mathbf{x}}^{\mathrm{T}}\mathbf{u}}_{i}{\mathbf{u}}_{i}^{\mathrm{T}}\mathbf{x}\end{array}$$
(F3)

The first- and second-order derivatives of \({\lambda }_{t}^{2}\) and \({q}_{t}\) with respect to t read

$$\left\{\begin{array}{c}\frac{\partial ({\lambda }_{t}^{2})}{\partial t}=\frac{2\kappa }{{\sigma }_{0}^{2}}{\sum }_{i=1}^{n}\mathrm{ln}\left({\lambda }_{i}\right){\lambda }_{i}^{2t+2}(\frac{1}{{\lambda }_{i}}-1){\mathbf{x}}^{\mathrm{T}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{\mathrm{T}}\mathbf{x}<0\\ \frac{{\partial }^{2}({\lambda }_{t}^{2})}{\partial {t}^{2}}=\frac{4\kappa }{{\sigma }_{0}^{2}}{\sum }_{i=1}^{n}{\left(\mathrm{ln}\left({\lambda }_{i}\right)\right)}^{2}{\lambda }_{i}^{2t+2}(\frac{1}{{\lambda }_{i}}-1){\mathbf{x}}^{\mathrm{T}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{\mathrm{T}}\mathbf{x}>0\end{array}\right.$$
(F4)
$$\left\{\begin{array}{c}\frac{\partial ({q}_{t})}{\partial t}=2{\sum }_{i=1}^{n}\mathrm{ln}\left({\lambda }_{i}\right){\lambda }_{i}^{2t+2}<0\\ \frac{{\partial }^{2}({q}_{t})}{\partial {t}^{2}}=4{\sum }_{i=1}^{n}{\left(\mathrm{ln}\left({\lambda }_{i}\right)\right)}^{2}{\lambda }_{i}^{2t+2}>0\end{array}\right.$$
(F5)

The above derivatives show that both \({\lambda }_{t}^{2}\) and \({q}_{t}\) decrease monotonically as the number of bias corrections t increases. Moreover, since \(0<{\lambda }_{i}<1\), every term in (F2) and (F3) tends to 0; hence both \({\lambda }_{t}^{2}\) and \({q}_{t}\) converge to 0 for sufficiently large t.
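The monotone decay of \({q}_{t}\) is easy to observe numerically. The sketch below (illustrative matrices, \(\mathbf{S}={\mathbf{I}}_{n}\), \(\mathbf{P}={\mathbf{I}}_{m}\)) computes \({q}_{t}=\mathrm{Tr}\{{(\kappa {\mathbf{Q}}_{\kappa s})}^{2t+2}\}\) for increasing t and confirms that the sequence is strictly decreasing; the decay is slow when \(\kappa {\mathbf{Q}}_{\kappa s}\) has singular values close to 1, i.e., for strongly ill-posed problems:

```python
import numpy as np

m, n = 10, 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(m)])
S = np.eye(n)
kappa = 1e-2

N = A.T @ A                                      # P = I_m
kQ = kappa * np.linalg.solve(N + kappa * S, S)   # kappa Q_{kappa s}; singular values in (0, 1)
q_t = [np.trace(np.linalg.matrix_power(kQ, 2*t + 2)) for t in range(60)]

decreasing = all(a > b for a, b in zip(q_t, q_t[1:]))
print(decreasing and q_t[-1] < q_t[0])
```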


About this article


Cite this article

Li, B., Wang, M. & Shen, Y. The hypothesis testing statistics in linear ill-posed models. J Geod 95, 11 (2021). https://doi.org/10.1007/s00190-020-01465-6
