
Reliability checking for GNSS baseline and network processing


An Erratum to this article was published on 23 July 2004

Abstract

Reliability analysis is inseparably connected with the formulation of failure scenarios, and common test statistics are based on specific assumptions. This is easily overlooked when processing observation differences. Poor failure identification performance and misleading pre-analysis results, mainly meaningless minimum detectable biases and external reliability measures, are the consequence. A reasonable failure scenario for use with differenced GNSS observations is formulated which takes into account that individual outliers in the original data affect more than one processed observation. The proper test statistics and reliability indicators are given for use with correlated observations and both batch processing and Kalman filtering. It is also shown that standardized residuals and redundancy numbers fail completely when used with double differenced observations.




Notes

  1. Center for Orbit Determination in Europe, http://www.cx.unibe.ch/aiub/igs.html

  2. On larger networks, certain baseline selections may be more favorable in terms of ambiguity resolution than others, but as long as only float solutions are considered, or once the ambiguities have been fixed, a transition to another linear combination of the same UD does not change the results.

  3. For example, for the epoch 8:00, this is the length of the longest line emanating from A in Fig. 6.

  4. A more efficient procedure would always search for a maximal set of linearly independent DD, thus occasionally introducing DD from additional ‘baselines’ into the adjustment.

  5. MathWorks, http://www.mathworks.com

References

  • Baarda W (1968) A testing procedure for use in geodetic networks, vol 2–5, new edn. Geodetic Commission, Delft, Netherlands

  • Caspary WF (1988) Concepts of network and deformation analysis. Monograph 11, School of Surveying, The University of New South Wales, Sydney, Australia

  • Koch KR, Yang Y (1998) Robust Kalman filter for rank deficient observation models. J Geod 72:436–441


  • Pope AJ (1975) The statistics of residuals and the detection of outliers. Department of Commerce, NOAA, National Ocean Survey, Technical Report NOS 65 NGS 1, Rockville, Maryland

  • Rothacher M, Springer T, Schaer S, Beutler G (1997) Processing strategies for regional GPS networks. In: Brunner FK (ed) Advances in positioning and reference frames, Proc. IAG Scientific Assembly, 3–9 September 1997, Rio de Janeiro, pp 93–100

  • Rousseeuw PJ, Leroy AM (1987) Robust regression and outlier detection. Wiley, New York

  • Sachs L (1997) Angewandte Statistik, 8th edn. Springer, Berlin Heidelberg New York, 884 pp

  • Salzmann MA (1990) MDB: a design tool for integrated navigation systems. In: Schwarz KP, Lachapelle G (eds) Proc. Kinematic Systems in Geodesy IAG Symposia, vol 107, Banff, Canada, 10–13 September 1990, pp 218–227

  • Schaffrin B (1997) Reliability measures for correlated observations. J Surv Eng 123:126–137


  • Teunissen PJG (1998a) Quality control and GPS. In: Teunissen PJG, Kleusberg A (eds) GPS for geodesy, 2nd edn. Springer, Berlin Heidelberg New York, pp 271–318

  • Teunissen PJG (1998b) Minimal detectable biases of GPS data. J Geod 72:236–244


  • Teunissen PJG, Salzmann MA (1989) A recursive slippage test for use in state-space filtering. Manuscr Geod 14:383–390


  • Wang J, Chen Y (1994) On the reliability measure of observations. Acta Geod Cartogr Sinica 23:42–51



Acknowledgements

This work was supported by the Austrian Science Fund under research grant J2284-N04. I also thank F.K. Brunner, S. Verhagen, B.H.W. van Gelder, and an anonymous reviewer for their valuable comments on a draft.

Author information


Correspondence to Andreas Wieser.

Additional information

An erratum to this article can be found at http://dx.doi.org/10.1007/s10291-004-0106-6

Appendices

Appendix A: Résumé of statistical reliability testing

This résumé largely follows Teunissen (1998a), except for the notation and comments.

Batch processing

Mathematical model

Let the parameter estimation be performed in a linearized Gauß-Markov model of full rank, with the n-vector y of reduced observations, m-vector ξ of parameters, Jacobi matrix A and n-vector e of residuals:

$$\mathbf{y} = \mathbf{A}\xi - \mathbf{e}, \qquad E\{\mathbf{y}\} = \mathbf{A}\xi, \qquad D\{\mathbf{y}\} = \mathbf{\Sigma}$$
(8)

E{...} denotes the expectation operator, and D{...} the dispersion. The best linear unbiased estimate of the parameters and the predicted residuals are obtained from

$$\hat{\xi} = \left(\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{y}$$
(9)
$$\tilde{\mathbf{e}} = -\mathbf{R}\mathbf{y}$$
(10)

with

$$\mathbf{R} = \mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\,\mathbf{\Sigma}^{-1}$$
(11)
$$\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}} = \mathbf{\Sigma} - \mathbf{A}\left(\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}'$$
(12)

The diagonal elements \(r_i\) of R are called redundancy numbers because of

$$\operatorname{tr}\mathbf{R} = \sum_{i=1}^{n}\mathbf{R}_{ii} = \sum_{i=1}^{n} r_i = n - m$$
(13)
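To make Eqs. (9)–(13) concrete, the following sketch computes the estimate, the predicted residuals, and the redundancy numbers for a small made-up adjustment; the design matrix, VCM, and observation values are illustrative assumptions (Python with NumPy), not values from the paper.

```python
# Minimal sketch of Eqs. (9)-(13); the 4x2 model and all numbers are
# illustrative assumptions, not data from the paper.
import numpy as np

A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])                    # Jacobi matrix (n = 4, m = 2)
Sigma = np.diag([1.0, 1.0, 2.0, 2.0]) * 1e-6   # VCM of the observations
y = np.array([0.012, -0.005, 0.009, 0.015])    # reduced observations

Si = np.linalg.inv(Sigma)
N = A.T @ Si @ A                               # normal equations A' Sigma^-1 A
xi_hat = np.linalg.solve(N, A.T @ Si @ y)      # Eq. (9): BLUE of the parameters

Sigma_ee = Sigma - A @ np.linalg.inv(N) @ A.T  # Eq. (12): VCM of the residuals
R = Sigma_ee @ Si                              # Eq. (11)
e_tilde = -R @ y                               # Eq. (10): predicted residuals

r = np.diag(R)                                 # redundancy numbers r_i
assert np.isclose(r.sum(), len(y) - A.shape[1])  # Eq. (13): tr R = n - m
```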

Failure detection

Failure detection within the model Eq. 8 is based on the null hypothesis

$$\mathrm{H}_0:\; E\{\mathbf{y}\} = \mathbf{A}\xi, \qquad D\{\mathbf{y}\} = \mathbf{\Sigma}$$
(14)

and the test statistic

$$T_0 = \frac{\tilde{\mathbf{e}}'\mathbf{\Sigma}^{-1}\tilde{\mathbf{e}}}{n - m}$$
(15)

If H0 is true, and if the observations are normally distributed, \(T_0\) is χ2-distributed. H0 is therefore rejected at the significance level α, i.e., a failure is assumed, if \(T_0 > \chi^2_{n-m,1-\alpha}\). This test is often called the global model test (GT). It can be performed if n − m ≥ 1.

The variance-covariance matrix (VCM) of the observations has been assumed known above (Eq. 8). If only the cofactor matrix can be established, Eq. (15) cannot be used. In this case, either a prior estimate of the variance factor is required for scaling the cofactor matrix (the test can then be based on a Fisher distribution with two finite degrees of freedom), or the GT must be omitted. If the variance factor can only be estimated from the data at hand, the GT cannot be performed.
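A minimal sketch of the GT decision under a known VCM follows; note that the quadratic form \(\tilde{\mathbf{e}}'\mathbf{\Sigma}^{-1}\tilde{\mathbf{e}}\) is the χ2-distributed quantity with f = n − m degrees of freedom, so the decision is made on the unscaled form. Function and variable names are illustrative.

```python
# Sketch of the global model test (Eq. 15); names are assumptions.
import numpy as np
from scipy.stats import chi2

def global_test(e_tilde, Sigma, f, alpha=0.05):
    """Return (T0, reject). The quadratic form e~' Sigma^-1 e~ is the
    chi^2_f variate under H0 (normally distributed observations)."""
    q_form = float(e_tilde @ np.linalg.solve(Sigma, e_tilde))
    T0 = q_form / f                            # Eq. (15)
    return T0, q_form > chi2.ppf(1 - alpha, f)
```

With the quantities from the first sketch, `global_test(e_tilde, Sigma, f=2)` performs the test at α = 0.05.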

Failure identification

The identification of a specific failure requires that H0 be tested against a specific alternative hypothesis

$$\mathrm{H}_a:\; E\{\mathbf{y}\} = \mathbf{A}\xi + \mathbf{C}\delta, \qquad D\{\mathbf{y}\} = \mathbf{\Sigma}$$
(16)

C models the influence of the q bias terms δ on the n observations. It is assumed that

  1. δ is a fixed (though unknown) bias, and

  2. the observations are normally distributed in both cases (H0 and Ha).

Failure identification is performed within a set of possible scenarios, each of which must be expressed in terms of a matrix C (Eq. 16) and statistically tested against H0. If one of these tests ends in favor of Ha, the related failure is identified. Clearly, it is essential that the scenarios tested are the likely ones.

Computationally, these local tests rely on estimating the respective bias terms δ as additional parameters and checking them for statistical significance. The test statistic for the k-th scenario is

$$T_k = \frac{1}{q}\,\tilde{\mathbf{e}}'\mathbf{\Sigma}^{-1}\mathbf{C}_k\left(\mathbf{C}_k'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\mathbf{C}_k\right)^{-1}\mathbf{C}_k'\mathbf{\Sigma}^{-1}\tilde{\mathbf{e}}$$
(17)

It is χ2-distributed with q degrees of freedom. A redundancy of at least 2 is required for identification of a single outlier.
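The scenario test of Eq. (17) could be coded as follows; since Eq. (17) carries the factor 1/q, the quantity q·Tk is compared with the χ2 quantile with q degrees of freedom. The function name and threshold are assumptions of the sketch.

```python
# Sketch of the identification test (Eq. 17) for one scenario matrix C_k.
import numpy as np
from scipy.stats import chi2

def scenario_test(C_k, e_tilde, Sigma, Sigma_ee, alpha0=0.001):
    """Return (T_k, reject) for the failure scenario described by C_k (n x q)."""
    Si = np.linalg.inv(Sigma)
    q = C_k.shape[1]
    M = C_k.T @ Si @ Sigma_ee @ Si @ C_k
    w = C_k.T @ Si @ e_tilde
    T_k = float(w @ np.linalg.solve(M, w)) / q       # Eq. (17)
    return T_k, q * T_k > chi2.ppf(1 - alpha0, q)    # chi^2 with q d.o.f.
```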

Simplified testing

In addition to the above assumptions 1 and 2, it is usually assumed that

  3. only one observation is contaminated in the sense of Eq. (16), and

  4. the observations are uncorrelated, i.e., Σ is diagonal.

In this case, the contamination is described by a scalar bias, each of the n observations is tested individually for this bias, and the test statistic can be simplified to yield the well-known standardized residuals

$$T_k = \frac{\tilde{e}_k}{\sigma_{\tilde{e}_k}} = \frac{\tilde{e}_k}{\sigma_k\sqrt{r_k}} \quad\text{with}\quad k = 1,\ldots,n$$
(18)

Here, the index k denotes the respective observation, and \(\tilde{e}_k\), \(\sigma_k\) and \(r_k\) are the corresponding residual, a-priori standard deviation, and redundancy number (see Eq. 13). \(T_k\) is normally distributed according to

$$\begin{aligned} & T_k \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_k \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \frac{\sqrt{r_k}}{\sigma_k}\,\delta_k \end{aligned}$$
(19)

So, H0 is rejected in favor of Hak, if

$$\left|T_k\right| > z_{1-\alpha_0/2}$$
(20)

with a suitably chosen probability \(\alpha_0\) of a type 1 error. Following Baarda’s (1968) suggestion, \(\alpha_0\) is tied to the type 1 error probability α and the power γ of the GT by the condition

$$\chi^2_{f,1-\alpha} = \chi^2_{f,\lambda_0,1-\gamma} \quad\text{with}\quad \lambda_0 = \left(z_{1-\alpha_0/2} - z_{1-\gamma_0}\right)^2 \;\text{and}\; \gamma_0 = \gamma$$
(21)

This relation is based on the idea that a specific failure which is detected with probability γ by the GT should yield a value \(T_k\) greater than the threshold with the same probability. This is reasonable once it has been assumed that there is only one outlier. Given the degrees of freedom f = n − m (determined by the functional model) and two of α, γ and \(\alpha_0\), the third one must be computed by evaluating Eq. (21). Note that this approach differs fundamentally from the one suggested by Pope (1975) for the case where no GT is performed.
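Eq. (21) has no closed-form solution, so one of α, γ, and α0 must be found numerically. The sketch below assumes f, α, and γ are given and computes λ0 and α0 with SciPy's noncentral χ2 distribution; the bracketing interval of the root search is an assumption of the example.

```python
# Numerical evaluation of Baarda's coupling (Eq. 21): given f, alpha, gamma,
# find lambda0 (noncentrality) and the local test level alpha0.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, ncx2, norm

f, alpha, gamma = 2, 0.05, 0.80
crit = chi2.ppf(1 - alpha, f)                  # chi^2_{f,1-alpha}

# lambda0: a noncentral chi^2(f, lambda0) exceeds crit with probability gamma
lam0 = brentq(lambda lam: ncx2.sf(crit, f, lam) - gamma, 1e-9, 200.0)

# alpha0 from lambda0 = (z_{1-alpha0/2} - z_{1-gamma0})^2 with gamma0 = gamma
z = np.sqrt(lam0) + norm.ppf(1 - gamma)        # z_{1-alpha0/2}
alpha0 = 2.0 * norm.sf(z)
print(f"lambda0 = {lam0:.3f}, alpha0 = {alpha0:.5f}")
```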

Testing correlated observations

If assumption 4 is invalid, i.e., if the observations are correlated (Σ is not diagonal), the above simplifications are not possible. Rather, in this case, the value of

$$T_k = \frac{\eta_{(k)}'\mathbf{\Sigma}^{-1}\tilde{\mathbf{e}}}{\sqrt{\eta_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\eta_{(k)}}}$$
(22)

must be used to check for a single outlier in the k-th observation, where \(\eta_{(k)}\) is the k-th canonical unit vector. It is readily verified that Eq. (22) equals Eq. (18) if Σ is diagonal. Still, \(T_k\) is normally distributed, although with a different noncentrality in the case of Hak:

$$\begin{aligned} & T_k \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_k \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \delta_k\sqrt{\eta_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\eta_{(k)}} \end{aligned}$$
(23)
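A sketch of Eq. (22) follows; the adjustment quantities Σ, Σẽẽ, and ẽ are taken as inputs, e.g., from the first sketch above, and the function name is an assumption.

```python
# Sketch of the test for a single outlier in the k-th correlated observation.
import numpy as np

def w_test(k, e_tilde, Sigma, Sigma_ee):
    """Eq. (22): N(0,1)-distributed under H0."""
    Si = np.linalg.inv(Sigma)
    eta = np.zeros(len(e_tilde)); eta[k] = 1.0          # canonical unit vector
    return float(eta @ Si @ e_tilde) / np.sqrt(eta @ Si @ Sigma_ee @ Si @ eta)
```

Screening all observations then amounts to evaluating `w_test` for every k and flagging the largest absolute value, provided it exceeds the threshold of Eq. (20).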

Testing observation differences

When processing differenced observations, assumption 3 does not generally hold (see the main text above). In order to revalidate the assumption of a scalar bias term rather than a vector, and thus allow for a simple identification strategy, one must test for failures at the level of the original, i.e., undifferenced observations (UD). This is indeed possible, even when processing, e.g., double-differenced observations (DD), by taking the transformation from UD to DD into account:

$$\mathbf{y} = \mathbf{D}\,\mathbf{y}_{\mathrm{UD}}$$
(24)

Here, D is the linear double-difference operator in matrix form. The k-th column \(\mathbf{d}_{(k)}\) of this matrix shows how the k-th UD contributes to all the DD. Correspondingly, it reveals how a scalar outlier \(\delta_k\) in the k-th UD contaminates all DD:

$$\mathbf{D}\left(\mathbf{y}_{\mathrm{UD}} + \eta_{(k)}\delta_k\right) = \mathbf{y} + \mathbf{d}_{(k)}\delta_k$$
(25)

The test statistic considering each individual UD as a possible outlier is then readily obtained from Eq. (17) as

$$T_k = \frac{\mathbf{d}_{(k)}'\mathbf{\Sigma}^{-1}\tilde{\mathbf{e}}}{\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\mathbf{d}_{(k)}}} \quad\text{with}\quad k \in \{1,\ldots,n_{\mathrm{UD}}\}$$
(26)

where \(n_{\mathrm{UD}}\) is the number of UD involved, and \(\mathbf{\Sigma}\), \(\tilde{\mathbf{e}}\), and \(\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\) are the same as above, i.e., they refer to the DD model. Again, each of the \(T_k\) is normally distributed, specifically

$$\begin{aligned} & T_k \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_k \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \delta_k\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\mathbf{d}_{(k)}} \end{aligned}$$
(27)
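To make Eqs. (24)–(26) concrete, the sketch below builds a toy double-difference operator for one baseline and three satellites (reference satellite 1), shows how a single UD outlier propagates into all DD, and wraps Eq. (26) in a function; the geometry and all numbers are assumptions of the example, not data from the paper.

```python
# Toy double-difference operator (Eqs. 24-25) and the UD-level test (Eq. 26).
import numpy as np

# UD order: receiver A, sats 1-3, then receiver B, sats 1-3 (n_UD = 6)
SD = np.array([[-1, 0, 0, 1, 0, 0],            # single differences B - A
               [0, -1, 0, 0, 1, 0],
               [0, 0, -1, 0, 0, 1]], dtype=float)
B = np.array([[-1, 1, 0],                      # between-satellite differences
              [-1, 0, 1]], dtype=float)        # w.r.t. reference satellite 1
D = B @ SD                                     # Eq. (24): y = D y_UD (2 x 6)

# Eq. (25): an outlier delta in UD k contaminates all DD via column d_(k)
k, delta = 4, 0.05                             # assumed outlier: sat 2 at B
eta = np.zeros(6); eta[k] = 1.0
print(D @ (eta * delta), D[:, k] * delta)      # identical vectors

def ud_test(k, e_tilde, Sigma, Sigma_ee, D):
    """Eq. (26): test for an outlier in the k-th UD; Sigma, Sigma_ee and
    e_tilde refer to the DD model, as in the text."""
    Si = np.linalg.inv(Sigma)
    d_k = D[:, k]
    return float(d_k @ Si @ e_tilde) / np.sqrt(d_k @ Si @ Sigma_ee @ Si @ d_k)
```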

Statistical reliability

In the design phase of a network, and for assessing the probability of correct failure detection, the statistical reliability is investigated. For each specific scenario formulated in terms of a matrix \(\mathbf{C}_k\), it addresses (1) the minimum length of \(\delta_k\) (the minimum detectable bias, MDB) which allows for a certain minimum probability of failure detection, and (2) the effect of an undetected bias on the estimated parameters. For practical purposes, only the latter may be of interest. However, the former is needed for the computations.

Simplified reliability analysis

In the simplified model of uncorrelated observations and individual outliers, the well-known equation for the MDB is

$$\left|\nabla^{\mathrm{MDB}}_k\right| = \frac{\sqrt{\lambda_0}\,\sigma_k}{\sqrt{r_k}}$$
(28)

where the noncentrality \(\sqrt{\lambda_0}\) is computed from \(\alpha_0\) and \(\gamma_0\) using Eq. (21).

The influence of an outlier on the estimated parameters is proportional to the size of the outlier (see Eq. 9). So, small outliers usually produce small effects, and large outliers produce serious effects but are easily detected. The most dangerous outliers are those with values close to the MDB, which may just go unnoticed and yet have a significant influence. So, only their effect on the estimated parameters is computed to quantify the external reliability:

$$\Delta\hat{\xi}_{\nabla k} = \left(\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}'\mathbf{\Sigma}^{-1}\eta_{(k)}\left|\nabla^{\mathrm{MDB}}_k\right|$$
(29)

Note that Eq. (29) produces an error vector for each of the MDB values, i.e., for each possible individual outlier. With few observations, these vectors can be plotted (see e.g., Fig. 3) to give a clear idea of what the effect of undetected failures on the estimated position might be, and which failures are most detrimental.
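A hedged sketch of Eqs. (28)–(29) follows; σ and r are the vectors of a-priori standard deviations and redundancy numbers, λ0 is the noncentrality from Eq. (21), and all names are illustrative.

```python
# MDB (Eq. 28) and external reliability (Eq. 29), uncorrelated observations.
import numpy as np

def mdb_uncorrelated(lam0, sigma, r):
    """Eq. (28): one MDB value per observation (vector inputs)."""
    return np.sqrt(lam0) * np.asarray(sigma) / np.sqrt(np.asarray(r))

def external_reliability(k, mdb_k, A, Sigma):
    """Eq. (29): parameter error caused by an MDB-sized outlier in obs. k."""
    Si = np.linalg.inv(Sigma)
    N = A.T @ Si @ A
    eta = np.zeros(A.shape[0]); eta[k] = 1.0
    return np.linalg.solve(N, A.T @ Si @ eta) * mdb_k
```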

Reliability analysis of correlated observations

If the observations are correlated, the redundancy numbers are no longer bound to the interval [0, 1] and may even be negative (Wang and Chen 1994). No change of Eq. 29 is necessary, but obviously, Eq. 28 cannot hold any more. The correct expression is given by Schaffrin (1997) in a notation formally equivalent to Eq. 28, using normalized redundancy numbers. In a computationally less expensive form it reads:

$$\left|\nabla^{\mathrm{MDB}}_k\right| = \frac{\sqrt{\lambda_0}}{\sqrt{\eta_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\eta_{(k)}}}$$
(30)

This equation can also be verified by comparison with Teunissen (1998a, p. 286), considering \(\mathbf{C} = \eta_{(j)}\).

Reliability analysis of observation differences

If we consider the individual failure to occur at the level of the UD, the MDB and the external reliability must be computed from

$$\left|\nabla^{\mathrm{MDB}}_k\right| = \frac{\sqrt{\lambda_0}}{\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}^{-1}\mathbf{\Sigma}_{\tilde{\mathbf{e}}\tilde{\mathbf{e}}}\mathbf{\Sigma}^{-1}\mathbf{d}_{(k)}}}$$
(31)
$$\Delta\hat{\xi}_{\nabla k} = \left(\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}'\mathbf{\Sigma}^{-1}\mathbf{d}_{(k)}\left|\nabla^{\mathrm{MDB}}_k\right|$$
(32)
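Eqs. (30)–(32) differ from the simplified case only in the denominator and in the vector mapping the bias into the observations; a sketch under the same assumptions as above:

```python
# MDBs for correlated observations (Eq. 30) and at the UD level (Eqs. 31-32).
import numpy as np

def mdb_correlated(k, lam0, Sigma, Sigma_ee):
    """Eq. (30), with eta_(k) the k-th canonical unit vector."""
    Si = np.linalg.inv(Sigma)
    eta = np.zeros(Sigma.shape[0]); eta[k] = 1.0
    return np.sqrt(lam0) / np.sqrt(eta @ Si @ Sigma_ee @ Si @ eta)

def mdb_ud(k, lam0, Sigma, Sigma_ee, D):
    """Eq. (31): MDB of the k-th undifferenced observation."""
    Si = np.linalg.inv(Sigma)
    d_k = D[:, k]
    return np.sqrt(lam0) / np.sqrt(d_k @ Si @ Sigma_ee @ Si @ d_k)

def external_reliability_ud(k, mdb_k, A, Sigma, D):
    """Eq. (32): effect of an MDB-sized UD outlier on the parameters."""
    Si = np.linalg.inv(Sigma)
    N = A.T @ Si @ A
    return np.linalg.solve(N, A.T @ Si @ D[:, k]) * mdb_k
```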

Appendix B: Kalman filter

Failure detection and identification within a Kalman filter can be performed in a manner formally equivalent to the above procedure. The main differences are that testing is possible at the level of individual epochs (local test) or of several epochs (global test), and that it can be restricted to failures in the observations or also include failures in the predicted states. Here, only local testing of the observations is considered, and only the differences with respect to Appendix A are outlined. More information can be found, e.g., in Teunissen (1998a) and Koch and Yang (1998).

Mathematical model

The reliability checking is based on the innovations

$$\tilde{\mathbf{e}}_t = \mathbf{y}_t - \mathbf{A}_t\tilde{\xi}_t \quad\text{with}\quad E\{\tilde{\mathbf{e}}_t\} = \mathbf{0} \;\text{and}\; D\{\tilde{\mathbf{e}}_t\} = \mathbf{\Sigma}_t$$
(33)

computed from the reduced observations \(\mathbf{y}_t\) at epoch t, the Jacobi matrix \(\mathbf{A}_t\), and the predicted state vector \(\tilde{\xi}_t\). This particular notation is chosen to underline the formal correspondence to the equations in Appendix A.

Failure detection

As above, it is assumed that the innovations are normally distributed. The overall model test for failure detection at the epoch level is performed using

$$\mathrm{H}_0:\; E\{\mathbf{y}_t\} = \mathbf{A}_t\tilde{\xi}_t, \qquad D\{\mathbf{y}_t\} = \mathbf{\Sigma}_t$$
(34)
$$T_0 = \frac{\tilde{\mathbf{e}}_t'\mathbf{\Sigma}_t^{-1}\tilde{\mathbf{e}}_t}{n_t}$$
(35)

If H0 is true, \(T_0\) is χ2-distributed with \(n_t\) degrees of freedom, where \(n_t\) is the number of observations at epoch t.
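A sketch of the epoch-wise detection test (Eqs. 33–35) follows. In a standard Kalman filter, the innovation VCM can be composed as Σt = At P⁻t At′ + Rt, with P⁻t the predicted state covariance and Rt the measurement noise VCM; this decomposition and all names are assumptions of the sketch rather than notation from the paper.

```python
# Local (epoch-wise) overall model test, Eqs. (33)-(35).
import numpy as np
from scipy.stats import chi2

def epoch_detection_test(y_t, A_t, xi_pred, P_pred, R_t, alpha=0.05):
    """Return (e_t, Sigma_t, T0, reject) for one measurement epoch."""
    e_t = y_t - A_t @ xi_pred                   # Eq. (33): innovations
    Sigma_t = A_t @ P_pred @ A_t.T + R_t        # innovation VCM (standard KF)
    n_t = len(y_t)
    q_form = float(e_t @ np.linalg.solve(Sigma_t, e_t))
    T0 = q_form / n_t                           # Eq. (35)
    return e_t, Sigma_t, T0, q_form > chi2.ppf(1 - alpha, n_t)
```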

Failure identification

For failure identification, each of the failure scenarios considered must be expressed as an alternative hypothesis in terms of a matrix C and a bias δ

$$\mathrm{H}_{ak}:\; E\{\mathbf{y}_t\} = \mathbf{A}_t\tilde{\xi}_t + \mathbf{C}_k\delta_k, \qquad D\{\mathbf{y}_t\} = \mathbf{\Sigma}_t$$
(36)

H0 is tested against Hak using the test statistic

$$T_k = \frac{1}{q}\,\tilde{\mathbf{e}}_t'\mathbf{\Sigma}_t^{-1}\mathbf{C}_k\left(\mathbf{C}_k'\mathbf{\Sigma}_t^{-1}\mathbf{C}_k\right)^{-1}\mathbf{C}_k'\mathbf{\Sigma}_t^{-1}\tilde{\mathbf{e}}_t$$
(37)
$$\begin{aligned} & T_k \,|\, \mathrm{H}_0 \sim \chi^2(q) \\ & T_k \,|\, \mathrm{H}_{ak} \sim \chi^2(q,\lambda_k) \quad\text{with}\quad \lambda_k = \delta_k'\mathbf{C}_k'\mathbf{\Sigma}_t^{-1}\mathbf{C}_k\delta_k \end{aligned}$$
(38)

Simplified testing

If the scenario investigated is a single outlier contaminating the k-th uncorrelated observation, the test statistic is a function of the corresponding residual and a-priori standard deviation

$$T_{t,k} = \frac{\tilde{e}_{t,k}}{\sigma_{t,k}} \quad\text{with}\quad k \in \{1,\ldots,n_t\}$$
(39)
$$\begin{aligned} & T_{t,k} \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_{t,k} \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \frac{\delta_k}{\sigma_{t,k}} \end{aligned}$$
(40)

Testing correlated observations

For a single outlier among correlated observations, the test statistic is

$$T_{t,k} = \frac{\eta_{(k)}'\mathbf{\Sigma}_t^{-1}\tilde{\mathbf{e}}_t}{\sqrt{\eta_{(k)}'\mathbf{\Sigma}_t^{-1}\eta_{(k)}}} \quad\text{with}\quad k \in \{1,\ldots,n_t\}$$
(41)
$$\begin{aligned} & T_{t,k} \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_{t,k} \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \delta_k\sqrt{\eta_{(k)}'\mathbf{\Sigma}_t^{-1}\eta_{(k)}} \end{aligned}$$
(42)

Testing observation differences

If the scenario investigated is a single outlier contaminating the k-th undifferenced observation, the test statistic is

$$T_{t,k} = \frac{\mathbf{d}_{(k)}'\mathbf{\Sigma}_t^{-1}\tilde{\mathbf{e}}_t}{\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}_t^{-1}\mathbf{d}_{(k)}}} \quad\text{with}\quad k \in \{1,\ldots,n_{\mathrm{UD},t}\}$$
(43)
$$\begin{aligned} & T_{t,k} \,|\, \mathrm{H}_0 \sim \mathrm{N}(0,1) \\ & T_{t,k} \,|\, \mathrm{H}_{ak} \sim \mathrm{N}\left(\sqrt{\lambda_k},\,1\right) \quad\text{with}\quad \sqrt{\lambda_k} = \delta_k\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}_t^{-1}\mathbf{d}_{(k)}} \end{aligned}$$
(44)

where \(n_{\mathrm{UD},t}\) is the number of undifferenced observations at this epoch, and \(\mathbf{d}_{(k)}\) is the k-th column of the difference operator in matrix form (see Eq. 24).
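The single-outlier statistics of Eqs. (41) and (43) share one structure; a sketch with illustrative names, where Σt and ẽt come from the epoch's measurement update and D is that epoch's difference operator:

```python
# Epoch-local outlier tests: correlated observations (Eq. 41) and UD (Eq. 43).
import numpy as np

def w_test_epoch(k, e_t, Sigma_t):
    """Eq. (41): single outlier in the k-th observation of the epoch."""
    Si_t = np.linalg.inv(Sigma_t)
    eta = np.zeros(len(e_t)); eta[k] = 1.0
    return float(eta @ Si_t @ e_t) / np.sqrt(eta @ Si_t @ eta)

def w_test_epoch_ud(k, e_t, Sigma_t, D):
    """Eq. (43): single outlier in the k-th undifferenced observation."""
    Si_t = np.linalg.inv(Sigma_t)
    d_k = D[:, k]
    return float(d_k @ Si_t @ e_t) / np.sqrt(d_k @ Si_t @ d_k)
```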

Statistical reliability

Simplified reliability analysis

The MDB and the external reliability of the k-th uncorrelated observation are

$$\left|\nabla^{\mathrm{MDB}}_{t,k}\right| = \sqrt{\lambda_0}\,\sigma_{t,k}$$
(45)
$$\Delta\hat{\xi}_{t,\nabla k} = \mathbf{K}_t\,\eta_{(k)}\left|\nabla^{\mathrm{MDB}}_{t,k}\right|$$
(46)

where K t is the Kalman gain matrix at epoch t.

Reliability analysis of correlated observations

The minimum detectable bias of the k-th correlated observation is

$$\left|\nabla^{\mathrm{MDB}}_{t,k}\right| = \frac{\sqrt{\lambda_0}}{\sqrt{\eta_{(k)}'\mathbf{\Sigma}_t^{-1}\eta_{(k)}}}$$
(47)

Its external reliability can be computed using Eq. 46.

Reliability analysis of observation differences

The MDB and external reliability of the k-th undifferenced observation can be computed from

$$\left|\nabla^{\mathrm{MDB}}_{t,k}\right| = \frac{\sqrt{\lambda_0}}{\sqrt{\mathbf{d}_{(k)}'\mathbf{\Sigma}_t^{-1}\mathbf{d}_{(k)}}}$$
(48)
$$\Delta\hat{\xi}_{t,\nabla k} = \mathbf{K}_t\,\mathbf{d}_{(k)}\left|\nabla^{\mathrm{MDB}}_{t,k}\right|$$
(49)
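Finally, Eqs. (47)–(49), with Eq. (45) as the diagonal special case, follow the same pattern; a sketch assuming Σt, the Kalman gain Kt, and λ0 are available:

```python
# Epoch-wise MDBs (Eqs. 45, 47, 48) and external reliability (Eqs. 46, 49).
import numpy as np

def mdb_epoch(k, lam0, Sigma_t):
    """Eq. (47); reduces to Eq. (45) for diagonal Sigma_t."""
    Si_t = np.linalg.inv(Sigma_t)
    eta = np.zeros(Sigma_t.shape[0]); eta[k] = 1.0
    return np.sqrt(lam0) / np.sqrt(eta @ Si_t @ eta)

def mdb_epoch_ud(k, lam0, Sigma_t, D):
    """Eq. (48): MDB of the k-th undifferenced observation of the epoch."""
    Si_t = np.linalg.inv(Sigma_t)
    d_k = D[:, k]
    return np.sqrt(lam0) / np.sqrt(d_k @ Si_t @ d_k)

def external_reliability_epoch(K_t, c_k, mdb_k):
    """Eqs. (46)/(49): state error from an MDB-sized bias mapped by c_k
    (eta_(k) for an observation failure, D[:, k] for a UD failure)."""
    return K_t @ c_k * mdb_k
```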


About this article

Cite this article

Wieser, A. Reliability checking for GNSS baseline and network processing. GPS Solutions 8, 55–66 (2004). https://doi.org/10.1007/s10291-004-0091-9


Keywords

Navigation