Introduction

In kinematic GNSS positioning, the Kalman filtering technique is widely used to produce optimal state estimates when the functional and stochastic models are correct. In practice, however, even with correctly defined functional models for the state and observation equations, appropriate stochastic modeling for pseudorange and carrier-phase measurements is not trivial, particularly for real-time applications (Wang et al. 2000). Insufficiently accurate modeling can lead to filter divergence. The state-of-the-art procedure for dealing with unknown stochastic models in the estimation process is adaptive Kalman filtering, for which two main approaches exist, namely multiple model-based adaptive estimation (MMAE) and innovation-based adaptive estimation (IAE), of which the latter is more suitable for an integrated inertial navigation system/global positioning system (INS/GPS) (Mohamed and Schwarz 1999). To apply the adaptive Kalman filtering technique in real-time GPS kinematic positioning, a precise estimator of the measurement noise Variance–Covariance (VC) matrix has been suggested by Wang et al. (1998). This approach does not require intensive computations; the measurement noise VC matrix can be estimated directly and is guaranteed to be positive definite (Wang 1999).

In standard kinematic GNSS processing, the VC matrix for differenced measurements typically represents the variances of code and carrier-phase measurements weighted by the satellite elevation angle. However, such a stochastic model might not be appropriate for challenging environments, such as urban areas. Wang (1999) has shown that a measurement noise VC matrix estimated from the a posteriori residuals of both the measurements and the state predictions, rather than only the residuals of the measurements (Hewitson and Wang 2007), can significantly improve the reliability of ambiguity resolution and the accuracy of kinematic positioning. However, during carrier-phase outages, i.e., when only pseudorange measurements are available, the convergence of the Kalman filter may be perturbed, degrading the filter performance with fewer valid Double-Differenced (DD) measurements. In addition, the optimal choice of the estimation window width depends strongly on the trajectory dynamics (Mohamed and Schwarz 1999), leaving the risk that the DD measurement noise VC matrix contains incorrectly predicted stochastic information. Thus, robust estimation should be utilized to obtain solutions with high accuracy and reliability (Liu et al. 2019).

In this work, we study how to deal with the degraded reliability and robustness of GPS-only real-time kinematic (RTK) positioning in complex environments when measurements are contaminated or interrupted. The performance of the suggested algorithm is evaluated against a Kalman filtering procedure implemented with the Gauss–Markov model. In addition, the F- and W-ratio tests (Frei and Beutler 1990; Wang et al. 1998) for validating integer ambiguity estimates are investigated. Their performance is analyzed based on real data collected in urban areas. Additionally, in a simulation test, the accuracy and robustness of the Kalman filter are investigated based on a reference trajectory that has been artificially contaminated with normally distributed measurement outliers. Finally, the results of both simulated and real GPS single-frequency datasets indicate that better RTK positioning performance can be achieved with the proposed method.

Kalman filter

In general, the Kalman filtering algorithm comprises two steps, prediction and update, as follows.

Prediction:

$$\begin{array}{*{20}c} {\hat{\varvec{x}}_{n|n - 1} = {{\varvec{\Phi}}}_{n|n - 1} \hat{\varvec{x}}_{n - 1|n - 1} } \\ \end{array}$$
(1)
$$\begin{array}{*{20}c} {{\varvec{P}}_{n|n - 1} = {{\varvec{\Phi}}}_{n|n - 1}{\varvec{P}}_{n - 1|n - 1}{{\varvec{\Phi}}}_{n|n - 1}^{T} + {\varvec{Q}}_{n} } \\ \end{array}$$
(2)

Updating:

$$\begin{array}{*{20}c} {{\varvec{K}}_{n} = {\varvec{P}}_{n|n - 1} {\varvec{H}}_{n}^{T} \left( {{\varvec{H}}_{n}\, {\varvec{P}}_{n|n - 1}{\varvec{H}}_{n}^{T} + {\varvec{R}}_{n} } \right)^{ - 1} } \\ \end{array}$$
(3)
$$\begin{array}{*{20}c} {\hat{\varvec{x}}_{n|n} = \hat{\varvec{x}}_{n|n - 1} + {\varvec{K}}_{n} \left( {{\varvec{z}}_{n} - {\varvec{H}}_{n} \hat{\varvec{x}}_{n|n - 1} } \right) } \\ \end{array}$$
(4)
$$\begin{array}{*{20}c} {{\varvec{P}}_{n|n} = \left( {{\varvec{I}} - {\varvec{K}}_{n} {\varvec{H}}_{n} } \right){\varvec{P}}_{n|n - 1} } \\ \end{array}$$
(5)

where \({\widehat{{\varvec{x}}}}_{n\mid n-1}\) and \({{\varvec{P}}}_{n\mid n-1}\) represent the state vector and its VC matrix predicted from the previous epoch \(n-1\) to the present epoch \(n\); \({\widehat{{\varvec{x}}}}_{n\mid n}\) is the updated state vector at epoch \(n\); \({{\varvec{P}}}_{n\mid n}\) is its corresponding VC matrix; \({{\varvec{K}}}_{n}\) is the Kalman gain matrix. In the “predict–update” loop of the Kalman filter, round-off errors caused by the different precision magnitudes of the state parameters can accumulate when computing \({{\varvec{P}}}_{n\mid n}\), leading to a non-symmetric matrix and possibly even to a loss of positive definiteness of the state covariance matrix. For numerical stability, the Joseph stabilized version of the covariance correction equation (Simon 2006; Chang 2014)

$$\begin{array}{*{20}c} {{\varvec{P}}_{n|n} = \left( {{\varvec{I}} - {\varvec{K}}_{n} {\varvec{H}}_{n} } \right) {\varvec{P}}_{n|n - 1} \left( {{\varvec{I}} - {\varvec{K}}_{n} {\varvec{H}}_{n} } \right)^{T} + {\varvec{K}}_{n} {\varvec{R}}_{n} {\varvec{K}}_{n}^{T} } \\ \end{array}$$
(6)

is the one that should be used to guarantee that the updated VC matrix remains symmetric and positive definite.
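For illustration, one measurement update combining the gain (3), the state update (4), and the Joseph form (6) might be sketched in Python as follows (NumPy assumed; function and variable names are ours, not from the original):

```python
import numpy as np

def kf_update_joseph(x_pred, P_pred, z, H, R):
    """One Kalman measurement update using the Joseph stabilized form (6).

    x_pred, P_pred : predicted state and its VC matrix, cf. (1)-(2)
    z, H, R        : measurements, design matrix, measurement noise VC matrix
    """
    S = H @ P_pred @ H.T + R                 # innovation VC matrix
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, (3)
    x_upd = x_pred + K @ (z - H @ x_pred)    # state update, (4)
    I_KH = np.eye(len(x_pred)) - K @ H
    # Joseph form: symmetric by construction, robust against round-off
    P_upd = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_upd, P_upd
```

In exact arithmetic the result equals the simple form (5); the benefit of (6) is purely numerical.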

Conventional Kalman filter

In conventional Kalman filtering, the VC matrix of DD measurements is expressed as (Takasu and Yasuda 2009)

$$\begin{array}{*{20}c} {{\varvec{R}}_{n} = \left( {\begin{array}{*{20}c} {{\varvec{D}} \cdot \sigma_{{L,{\text{SD}}}}^{2} \cdot {\varvec{D}}^{T} } & {\varvec{0}} \\ {\varvec{0}} & {{\varvec{D}} \cdot \sigma_{{P,{\text{SD}}}}^{2} \cdot {\varvec{D}}^{T} } \\ \end{array} } \right)} \\ \end{array}$$
(7)

where \({\varvec{D}}\) is the single-differencing (SD) matrix; \({\sigma }_{L,\mathrm{SD}}^{2}\) and \({\sigma }_{P,\mathrm{SD}}^{2}\) represent the variances of the SD carrier-phase and code measurements, respectively. Since the carrier-phase noise is roughly 100 times smaller than the pseudorange noise (Kee et al. 1997), the code variance can be derived from the carrier-phase variance as

$$\begin{array}{*{20}c} {\sigma_{{P,{\text{SD}}}}^{2} = 100^{2} \cdot \sigma_{{L,{\text{SD}}}}^{2} = 100^{2} \cdot 2 \cdot \sigma_{L}^{2} } \\ \end{array}$$
(8)

where \(\sigma_{L}^{2}\) denotes the variance of the undifferenced carrier-phase measurement, which can be approximated as inversely proportional to the sine of the satellite elevation angle as (Xi et al. 2018)

$$\begin{array}{*{20}c} {\sigma_{L}^{2} = a^{2} + \frac{{b^{2} }}{{\sin^{2} \left( E \right)}}} \\ \end{array}$$
(9)

where \(E\) is the elevation angle of the satellite; \(a\) and \(b\) are the carrier-phase error factors, which are empirically set to \(3\) mm.
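A minimal sketch of building \({\varvec{R}}_{n}\) from (7)–(9) is given below (NumPy assumed; applying the scalar relations (8)–(9) per satellite and taking the first satellite as the master are our assumptions, as are all names):

```python
import numpy as np

def dd_measurement_noise(elevations_rad, a=0.003, b=0.003):
    """Build the DD measurement noise VC matrix R of (7) for one epoch.

    elevations_rad : elevation angles of the tracked satellites [rad];
                     the first entry is taken as the master satellite
    a, b           : carrier-phase error factors in meters (3 mm), cf. (9)
    """
    m = len(elevations_rad)
    # undifferenced carrier-phase variances, elevation-dependent, (9)
    var_L = a**2 + b**2 / np.sin(elevations_rad)**2
    # between-receiver single differencing doubles the variance, (8)
    var_L_sd = 2.0 * var_L
    var_P_sd = 100.0**2 * var_L_sd          # code noise ~100x larger, (8)
    # SD-to-DD differencing matrix D: subtract the master satellite
    D = np.hstack([-np.ones((m - 1, 1)), np.eye(m - 1)])
    R_L = D @ np.diag(var_L_sd) @ D.T
    R_P = D @ np.diag(var_P_sd) @ D.T
    # block-diagonal arrangement of carrier and code parts, (7)
    return np.block([[R_L, np.zeros_like(R_L)],
                     [np.zeros_like(R_P), R_P]])
```

Note that the differencing with \({\varvec{D}}\) makes the DD measurements mutually correlated, which is why \({\varvec{R}}_{n}\) is not diagonal.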

Adaptive estimation of the measurement noise matrix

There are four main approaches to adaptive Kalman filtering: Bayesian, maximum likelihood (ML), correlation, and covariance matching (Mehra 1972). The Bayesian and ML estimation methods are time-consuming, while the correlation method is only suitable when the design matrix has invariant elements (Wang 1999). Considering real-time processing, it is more efficient to apply the covariance-matching technique. In this paper, the state vector contains the position and velocity components of the rover \(\left({r}_{x},{r}_{y},{r}_{z},{\dot{r}}_{x},{\dot{r}}_{y},{\dot{r}}_{z}\right)\), and its corresponding process noise matrix \({{\varvec{Q}}}_{n}\) [see (2)] can be expressed according to Schwarz et al. (1989) and Yang et al. (2001) as

$$\begin{array}{*{20}c} {{\varvec{Q}}_{n} = \sigma^{2} \cdot \left( {\begin{array}{*{20}l} {\frac{1}{3}\Delta t^{3} } \hfill & 0 \hfill & 0 \hfill & {\frac{1}{2}\Delta t^{2} } \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {\frac{1}{3}\Delta t^{3} } \hfill & 0 \hfill & 0 \hfill & {\frac{1}{2}\Delta t^{2} } \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {\frac{1}{3}\Delta t^{3} } \hfill & 0 \hfill & 0 \hfill & {\frac{1}{2}\Delta t^{2} } \hfill \\ {\frac{1}{2}\Delta t^{2} } \hfill & 0 \hfill & 0 \hfill & {\Delta t} \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {\frac{1}{2}\Delta t^{2} } \hfill & 0 \hfill & 0 \hfill & {\Delta t} \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {\frac{1}{2}\Delta t^{2} } \hfill & 0 \hfill & 0 \hfill & {\Delta t} \hfill \\ \end{array} } \right)} \\ \end{array}$$
(10)

where \({\sigma }^{2}\) is the spectral density of the velocity noise; \(\Delta t\) represents the sampling interval. In this case, the estimation of the measurement noise VC matrix \({\varvec{R}}\) can be handled more successfully by covariance matching (Mehra 1972).
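The matrix (10) can be assembled directly (NumPy assumed; names are ours):

```python
import numpy as np

def process_noise(dt, sigma2):
    """Process noise VC matrix Q_n of (10) for a position/velocity state.

    dt     : sampling interval in seconds
    sigma2 : spectral density of the velocity noise
    """
    I3 = np.eye(3)
    Qpp = (dt**3 / 3.0) * I3   # position block
    Qpv = (dt**2 / 2.0) * I3   # position-velocity cross block
    Qvv = dt * I3              # velocity block
    return sigma2 * np.block([[Qpp, Qpv], [Qpv, Qvv]])
```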

The covariance matching approach is based on the idea of making the elements of the actual innovation VC matrix consistent with their theoretical values (Mehra 1972; Maybeck 1982; Wang 1999). The innovation sequence, or predicted residual vector, \({\varvec{v}}_{n}\) can be expressed as

$$\begin{array}{*{20}c} {{\varvec{v}}_{n} = {\varvec{z}}_{n} - {\varvec{H}}_{n} \hat{\varvec{x}}_{n|n - 1} . } \\ \end{array}$$
(11)

The theoretical innovation VC matrix is given by

$$\begin{array}{*{20}c} {{\varvec{C}}_{{{\varvec{v}}_{n} }} = {\varvec{R}}_{n} + {\varvec{H}}_{n} {\varvec{P}}_{n|n - 1} {\varvec{H}}_{n}^{T} .} \\ \end{array}$$
(12)

The actual VC matrix of \({\varvec{v}}_{n}\) is approximated by its sample covariance (Mehra 1972; Mohamed and Schwarz 1999; Almagbile et al. 2010), i.e.,

$$\begin{array}{*{20}c} {\hat{\varvec{C}}_{{{\varvec{v}}_{n} }} = \frac{1}{m}\mathop \sum \limits_{i = 1}^{m} {\varvec{v}}_{n - i} {\varvec{v}}_{n - i}^{T} } \\ \end{array}$$
(13)

where \(m\) is the size of an empirically selected window width at epoch \(n\).

Inserting (13) into (12), the innovation-based adaptive estimate of the measurement noise VC matrix is obtained and written as

$$\begin{array}{*{20}c} {\hat{\varvec{R}}_{n} = \frac{1}{m}\mathop \sum \limits_{i = 1}^{m} {\varvec{v}}_{n - i} {\varvec{v}}_{n - i}^{T} - {\varvec{H}}_{n} {\varvec{P}}_{n|n - 1} {\varvec{H}}_{n}^{T} .} \\ \end{array}$$
(14)
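For illustration, (13)–(14) can be implemented as follows (NumPy assumed; names are ours). The example in the test shows that the subtraction in (14) need not yield a positive definite matrix:

```python
import numpy as np

def adaptive_R_innovation(innovations, H, P_pred):
    """Innovation-based estimate (14) of the measurement noise VC matrix.

    innovations : the last m innovation vectors v_{n-i}, cf. (11)
    H, P_pred   : current design matrix and predicted state VC matrix
    """
    m = len(innovations)
    # sample VC matrix of the innovation sequence, (13)
    C_hat = sum(np.outer(v, v) for v in innovations) / m
    # subtracting H P H^T may destroy positive definiteness, (14)
    return C_hat - H @ P_pred @ H.T
```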

However, due to the subtraction of two positive definite matrices, the result is not guaranteed to be a positive definite matrix \({\varvec{R}}\). Therefore, Wang et al. (1998) formulate an integrated measurement model based on the least-squares principle, for which the Gauss–Markov model is described as

$$\begin{array}{*{20}c} {{\varvec{l}}_{n} = {\varvec{A}}_{n} \hat{\varvec{x}}_{n|n - 1} + {\varvec{v}}_{n} } \\ \end{array}$$
(15)

where

$$\begin{array}{*{20}c} {{\varvec{l}}_{n} = \left( {\begin{array}{*{20}l} {{\varvec{z}}_{n} } \hfill \\ {\hat{\varvec{x}}_{n|n - 1} } \hfill \\ \end{array} } \right); \quad {\varvec{A}}_{n} = \left( {\begin{array}{*{20}l} {{\varvec{H}}_{n} } \hfill \\ {{\varvec{I}}_{n} } \hfill \\ \end{array} } \right); \quad {\varvec{v}}_{n} = \left( {\begin{array}{*{20}l} {{\varvec{v}}_{{{\varvec{z}}_{n} }} } \hfill \\ {{\varvec{v}}_{{\hat{\varvec{x}}_{n|n - 1} }} } \hfill \\ \end{array} } \right) } \\ \end{array}$$

together with the stochastic model, which is given as

$$\begin{array}{*{20}c} {{\varvec{C}}_{{{\varvec{l}}_{n} }} = \left( {\begin{array}{*{20}l} {{\varvec{R}}_{n} } \hfill & {\mathbf{0}} \hfill \\ {\mathbf{0}} \hfill & {{\varvec{P}}_{n|n - 1} } \hfill \\ \end{array} } \right).} \\ \end{array}$$
(16)

The optimal estimator of the state parameters can be determined by

$$\begin{array}{*{20}c} {\hat{\varvec{x}}_{n|n - 1}^{*} = \left( {{\varvec{A}}_{n}^{T} {\varvec{C}}_{{{\varvec{l}}_{n} }}^{ - 1} {\varvec{A}}_{n} } \right)^{ - 1} {\varvec{A}}_{n}^{T} {\varvec{C}}_{{{\varvec{l}}_{n} }}^{ - 1} {\varvec{l}}_{n} .} \\ \end{array}$$
(17)

\({\varvec{C}}_{{{\varvec{l}}_{n} }}\) and \({\varvec{C}}_{{{\varvec{l}}_{n} }}^{ - 1}\) denote the VC matrix and the weight matrix of the measurement vector \({\varvec{l}}_{n}\). The residuals \({\varvec{v}}_{n}\) are divided into the residuals of the measurements \({\varvec{v}}_{{{\varvec{z}}_{n} }}\) and of the predicted states \({\varvec{v}}_{{\hat{\varvec{x}}_{n|n - 1} }}\). Both a posteriori residual vectors are calculated as follows

$$\begin{array}{*{20}c} {{\varvec{v}}_{n} = {\varvec{l}}_{n} - {\varvec{A}}_{n} \hat{\varvec{x}}_{n|n - 1}^{*} } \\ \end{array}$$
(18)
$${\text{i}}.{\text{e}}.\quad \left( {\begin{array}{*{20}c} {{\varvec{v}}_{{{\varvec{z}}_{n} }} } \\ {{\varvec{v}}_{{\hat{\varvec{x}}_{n|n - 1} }} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {{\varvec{z}}_{n} } \\ {\hat{\varvec{x}}_{n|n - 1} } \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} {{\varvec{H}}_{n} } \\ {{\varvec{I}}_{n} } \\ \end{array} } \right)\hat{\varvec{x}}_{n|n - 1}^{*} .$$

Thus, using the error propagation law, the VC matrix of the measurement residuals can be derived as

$$\begin{array}{*{20}c} {{\varvec{C}}_{{{\varvec{v}}_{{{\varvec{z}}_{n} }} }} = {\varvec{R}}_{n} - {\varvec{H}}_{n} {\varvec{P}}_{n|n - 1}^{*} {\varvec{H}}_{n}^{T} .} \\ \end{array}$$
(19)

Then, matching the approximately computed actual VC matrix \(\hat{\varvec{C}}_{{{\varvec{v}}_{{{\varvec{z}}_{n} }} }}\) of the measurement residuals with its theoretical form \({\varvec{C}}_{{{\varvec{v}}_{{{\varvec{z}}_{n} }} }}\), the positive definite VC matrix of the measurement noise can be estimated according to Wang (1999) as

$$\begin{array}{*{20}c} {\hat{\varvec{R}}_{n} = \frac{1}{m}\mathop \sum \limits_{i = 1}^{m} {\varvec{v}}_{{{\varvec{z}}_{n - i} }} {\varvec{v}}_{{{\varvec{z}}_{n - i} }}^{T} + {\varvec{H}}_{n} {\varvec{P}}_{n|n - 1}^{*} {\varvec{H}}_{n}^{T} .} \\ \end{array}$$
(20)

The performance of this approach strongly depends on the proper choice of the moving window width \(m\). At epoch \(n\), only measurement residuals from the previous \(m\) epochs are considered. In practice, filter divergence may occur if the window size is smaller than the number of update measurements (Mehra 1972). If the window size is sufficiently large, up to the length of the dataset, the adaptive filter behaves identically to a conventional filter. It should be noted that applying the adaptively estimated measurement noise matrix implicitly introduces a time-correlated VC matrix. While this should not pose a problem for the actual RTK algorithms, elements of the VC matrix may become too large or too small, leading to overly optimistic or pessimistic formal errors. Although very unlikely, integer ambiguity solutions could therefore be wrongly accepted or rejected. However, as the impact is expected to be of minor order, this shortcoming is not studied further, and the suggested method is applied in the data processing presented in the following sections.
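The integrated adjustment (15)–(18) can be sketched in Python as follows (NumPy assumed; names are ours). The a posteriori measurement residuals returned here, collected over the moving window, are what feed the positive definite estimate (20):

```python
import numpy as np

def integrated_adjustment(z, H, x_pred, P_pred, R):
    """One epoch of the integrated Gauss-Markov model (15)-(18).

    Stacks measurements and predicted states into l_n, solves the
    least-squares problem (17), and returns the split residuals (18).
    """
    k, s = H.shape
    l = np.concatenate([z, x_pred])                   # stacked vector, (15)
    A = np.vstack([H, np.eye(s)])
    # block-diagonal weight matrix, inverse of (16)
    C_l_inv = np.block([[np.linalg.inv(R), np.zeros((k, s))],
                        [np.zeros((s, k)), np.linalg.inv(P_pred)]])
    N = A.T @ C_l_inv @ A
    x_star = np.linalg.solve(N, A.T @ C_l_inv @ l)    # optimal state, (17)
    v = l - A @ x_star                                # residuals, (18)
    return x_star, v[:k], v[k:]                       # state, v_z, v_x
```

For a linear model, \(\hat{\varvec{x}}_{n|n-1}^{*}\) coincides with the conventional Kalman update, which makes the scheme attractive for real-time use.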

Robust estimation based on the IGG III method

If an outlier occurs, the variance of the affected observation should be inflated to reduce its weight (Yang et al. 2002). With such an adaptation of the VC matrix or weight matrix, the robust Kalman filter outperforms the conventional one. To maintain the intrinsic correlation among all observations after the matrix adjustment, an equivalent VC matrix or weight matrix has to be constructed. The weight element \({\overline{p} }_{ij}\) of a robust equivalent weight matrix \(\overline{\varvec{P}}\) can be expressed according to Gui and Zhang (1998), Yang (1999), and Yang et al. (2002) as

$$\begin{array}{*{20}c} {\overline{p}_{ij} = \gamma_{ij} \cdot p_{ij} } \\ \end{array}$$
(21)

where \(\gamma_{ij} = \sqrt {\gamma_{ii} \cdot \gamma_{jj} }\) is the weight reduction factor; \(\gamma_{ii}\) and \(\gamma_{jj}\) are called “bifactors.”

Considering the standardized residuals \(\tilde{\varvec{v}}_{i} = \frac{{{\varvec{v}}_{i} }}{{\upsigma }}\), the reduction factor for the weight element is given as (Yang 1999; Yang et al. 2002)

$$\begin{array}{*{20}c} {\gamma _{{ii}} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {\left| { {\tilde{v}}_{i} } \right| < c_{0} } \hfill \\ {\frac{{c_{0} }}{{\left| {{\tilde{v}}_{i} } \right|}}\left( {\frac{{c_{1} - \left| {{\tilde{v}}_{i} } \right|}}{{c_{1} - c_{0} }}} \right)^{2} ,~} \hfill & {~c_{0} < \left| {{\tilde{v}}_{i} } \right| \le c_{1} } \hfill \\ {0,} \hfill & {\left| {{\tilde{v}}_{i} } \right| > c_{1} } \hfill \\ \end{array} } \right.} \\ \end{array}$$
(22)

where \({c}_{0}\) and \({c}_{1}\) are two empirical constants, usually chosen between \(1.0\) and \(1.5\), and between \(2.5\) and \(3.0\), respectively (Gui and Zhang 1998). In the Kalman filtering approach formulated with the Gauss–Markov model (cf. “Adaptive estimation of the measurement noise matrix” section), the IGG (Institute of Geodesy and Geophysics) III scheme with \({c}_{0}=1.0\) and \({c}_{1}=2.5\) is utilized. Based on (15), standardized residuals can be used to detect and identify outliers. Since the number of states, including position and velocity, is constant, the rejection segment (\(\left|{\widetilde{{\varvec{v}}}}_{i}\right|>{c}_{1}\)) should be handled with caution: the state values cannot be eliminated, even if their weights are decreased to zero. Furthermore, to improve the robustness of the adaptive estimation of \({\varvec{R}}\), the integrated VC matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\) in (16) is replaced by the robustly estimated VC matrix composed of both measurements and predicted states. The correlation of the DD measurements remains unaffected by the variance inflation factor \({\lambda }_{ij}\), defined as (Liu et al. 2019)

$$\begin{array}{*{20}c} {{\uplambda }_{ij} = \frac{1}{{{\upgamma }_{ij} }}.} \\ \end{array}$$
(23)

To ensure the stability of filtering and the equivalence of the VC matrix, \({\gamma }_{ij}\) is set to \({10}^{-5}\) for \(\left|{\widetilde{{\varvec{v}}}}_{i}\right|>{c}_{1}\), as depicted in Fig. 1.

Fig. 1

Curve of the reduction factor \({\gamma }_{ij}\) for the weight elements (Yang et al. 2002)

As a result of this modification, large outliers are not simply neglected but remain in the observation system with very low weight. Additionally, the intrinsic correlation of the measurements and predicted states remains unaltered.
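The bifactor scheme (21)–(22) with the floor of \({10}^{-5}\) for the rejection segment can be sketched as follows (names are ours):

```python
import numpy as np

def igg3_factor(v_std, c0=1.0, c1=2.5, gamma_floor=1e-5):
    """IGG III weight reduction factor (22) for one standardized residual.

    Residuals in the rejection segment keep the tiny weight gamma_floor
    instead of zero, so the observation stays in the system.
    """
    a = abs(v_std)
    if a < c0:
        return 1.0                                   # keep segment
    if a <= c1:
        return (c0 / a) * ((c1 - a) / (c1 - c0))**2  # down-weight segment
    return gamma_floor                               # rejection segment

def igg3_offdiag(vi, vj, **kw):
    """Off-diagonal factor gamma_ij = sqrt(gamma_ii * gamma_jj), (21)."""
    return np.sqrt(igg3_factor(vi, **kw) * igg3_factor(vj, **kw))
```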

Integer ambiguity resolution

Only the fractional phase of the carrier can be measured, and each carrier-phase measurement therefore contains an unknown integer number of full cycles (Verhagen and Teunissen 2006). Hence, integer ambiguity resolution, i.e., the process of resolving the unknown DD carrier-phase ambiguities, is crucial for high-precision GNSS applications. Although the master satellite has the highest elevation angle, it is still possible that its carrier-phase measurements are blocked, or that after a long observation time its elevation angle is no longer dominant, so that a hand-over of the master satellite occurs. In this case, the resulting jumps in the DD carrier-phase ambiguities are hard to distinguish from cycle slips, which can result in an inappropriate initialization of the corresponding ambiguity. To avoid this problem, we use the SD form rather than the DD form of the carrier-phase ambiguities in the state vector.

Ambiguity resolution procedure

Conceptually, the ambiguity resolution procedure includes four steps:

  1. Computation of the float solution

  2. Estimation of integer ambiguities

  3. Ambiguity validation test

  4. Computation of the fixed solution

First, the SD ambiguities and the other unknown state parameters are estimated by the Kalman filter. In our dynamic model of RTK positioning, each SD ambiguity is modeled as a random walk with a noise spectral density of \({10}^{-8} {\mathrm{cycle}}^{2}/\mathrm{s}\), whereas the rover position is assumed to follow an integrated random walk process. This small spectral density allows the SD ambiguities to remain nearly constant over time. Since the SD ambiguities are uncorrelated among satellites, the corresponding covariance entries in the process noise matrix are set to zero. Prior to the next step, the updated states and their VC matrix in (4) and (5) are converted to the DD form according to Takasu and Yasuda (2009) as

$$\varvec{\hat{x}}_{{n|n}}^{\prime } = {\text{ }}\varvec{G\hat{x}}_{{n|n}} = \left( {\underbrace {{r_{x} ,r_{y} ,r_{z} ,\dot{r}_{x} ,\dot{r}_{y} ,\dot{r}_{z} }}_{{\varvec{\hat{b}}}},\underbrace {{\varvec{N}_{{rb}}^{{ij}} }}_{{\varvec{\hat{a}}}}} \right)^{T}$$
$$\varvec{P}_{{n|n}}^{\prime } = {\text{ }}\varvec{GP}_{{n|n}} \varvec{G}^{T} = \left( {\begin{array}{*{20}l} {\varvec{Q}_{{\varvec{\hat{b}}}} } \hfill & {\varvec{Q}_{{\varvec{\hat{b}\hat{a}}}} } \hfill \\ {\varvec{Q}_{{\varvec{\hat{a}\hat{b}}}} } \hfill & {\varvec{Q}_{{\varvec{\hat{a}}}} } \hfill \\ \end{array} } \right)$$
(24)
$${\varvec{G}} = \left( {\begin{array}{*{20}l} {{\varvec{I}}_{6} } \hfill & \varvec{0} \hfill \\ \varvec{0} \hfill & {\varvec{D}} \hfill \\ \end{array} } \right)$$

where \({\left(\cdot \right)}^{ij}\) and \({\left(\cdot \right)}_{rb}\) denote the difference between satellites and between the two receivers, respectively; \({\varvec{G}}\) is a transformation matrix containing the SD matrix \({\varvec{D}}\), which converts the SD ambiguities to DD ambiguities; \({{\varvec{N}}}_{rb}^{ij}\) represents the DD carrier-phase ambiguity.

To obtain the Integer Least-Squares (ILS) ambiguity solution \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}\), the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method (Teunissen 1993, 1994, 1995) is employed to solve the minimization problem

$$q\left( \varvec{a} \right) = \min \limits_{\varvec{a}} \left( {\varvec{\hat{a}} - \varvec{a}} \right)^{T} \varvec{Q}_{\hat{{\varvec{a}}}}^{{ - 1}} \left( {\varvec{\hat{a}} - \varvec{a}} \right),\quad{{\varvec{a}}}\in {\mathbb{Z}}^{n}$$
(25)

where the float ambiguity estimate \(\hat{\varvec{a}}\) is assumed to be normally distributed with integer mean \({\varvec{a}}\) and VC matrix \({\varvec{Q}}_{{\hat{\varvec{a}}}}\).
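For low ambiguity dimensions, the minimization (25) can be illustrated by an exhaustive search around the float solution (names are ours; the actual LAMBDA method first decorrelates \({\varvec{Q}}_{{\hat{\varvec{a}}}}\) and then searches far more efficiently):

```python
import itertools
import numpy as np

def ils_brute_force(a_float, Q_a, radius=2):
    """Solve (25) by exhaustive search near the float solution.

    For illustration only and only feasible in low dimensions.  Returns
    the best and second-best integer candidates with their quadratic
    forms q(a), as needed by the ratio tests.
    """
    Q_inv = np.linalg.inv(Q_a)
    center = np.round(a_float).astype(int)
    candidates = []
    ranges = [range(c - radius, c + radius + 1) for c in center]
    for cand in itertools.product(*ranges):
        d = a_float - np.array(cand, dtype=float)
        candidates.append((float(d @ Q_inv @ d), cand))
    candidates.sort(key=lambda t: t[0])
    return candidates[0], candidates[1]   # (q_best, a), (q_second, a')
```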

Since the LAMBDA method has been demonstrated to resolve the ambiguities effectively, this contribution focuses solely on ambiguity validation techniques. An ambiguity validation test is employed to determine whether the integer ambiguity estimate \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a}\) is acceptable. Once the best integer candidate set is identified with the corresponding critical value, the state parameters other than the float ambiguities can be updated using the fixed integer ambiguities.

Both the F-ratio and the W-ratio use the squared distance between the float solution and the integer candidates as a criterion. The F-ratio, as defined by Counselman III and Abbot (1989), Teunissen and Odijk (1997), and Teunissen and Verhagen (2009) and expressed as

$$F = \frac{{q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }^{\prime } } \right)}}{{q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }} \right)}}$$
(26)

is traditionally the method of choice. The numerator \(q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }^{\prime } } \right)\) and the denominator \(q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }} \right)\) represent the squared distances from the float solution to the second-best and to the best integer solution, respectively. Since the two quadratic forms are not independent, the F-ratio cannot be assumed to follow a Fisher distribution (Teunissen 1998; Wang et al. 1998; Teunissen and Verhagen 2009). An alternative validation procedure utilizes the W-ratio to assess the likelihood of the best ambiguity combination relative to the second-best one (Wang et al. 1998):

$$\begin{array}{*{20}c} {W = \frac{d}{{\sqrt {{\text{Var}}\left( d \right)} }}} \\ \end{array}$$
(27)

where \(d = q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }^{\prime } } \right) - q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }} \right)\) denotes the difference between the second-smallest and the smallest quadratic form of the least-squares residuals; \({\text{Var}}\left(d\right)\) is the estimated variance of \(d\). If the test statistic \(W\) is not smaller than a critical value, later referred to as \(c\), the likelihood of the best ambiguity combination is statistically significantly greater than that of the second-best one.
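The two acceptance decisions can be sketched as follows (the default critical values are illustrative placeholders, not the values used in the text, and \({\text{Var}}\left(d\right)\) must be supplied from the estimation described by Wang et al. (1998)):

```python
def f_ratio_accept(q_best, q_second, c=3.0):
    """F-ratio test (26): accept the best integer candidate set if the
    second-best candidate is sufficiently worse (illustrative c)."""
    return q_second / q_best >= c

def w_ratio_accept(q_best, q_second, var_d, c=1.645):
    """W-ratio test (27): accept if W = d / sqrt(Var(d)) >= c, where
    d = q(a') - q(a); var_d must be estimated externally."""
    d = q_second - q_best
    return d / var_d**0.5 >= c
```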

Success probability of ambiguity resolution

A ratio test is applied to determine whether the float solution is close to its nearest integer vector, rather than whether the ILS solution is correct (Teunissen and Verhagen 2009). A critical value c is used to decide whether \(q\left({\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }^{\prime } } \right)\) significantly exceeds \(q\left( {\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }} \right)\). If so (i.e., if the ratio of (26) or (27) is not less than \(c\)), the ILS solution \(\varvec{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} }\) is acceptable.

Generally, the threshold value \(\mathrm{T}=1/c\) can be determined using look-up tables based on the pre-defined failure rate \({P}_{f}\) and the calculated ILS failure rate \({P}_{f,{\text{ILS}}}\) (1 minus the ILS success rate). Compared to a single fixed value for \(c\), this method has been shown to be more effective (Teunissen and Verhagen 2009). Due to the complex geometry of the ILS pull-in region, the exact ILS success rate \({P}_{s,{\text{ILS}}}\) cannot be calculated (Verhagen and Teunissen 2006). To provide a lower bound for \({P}_{s,{\text{ILS}}}\), the success probability of bootstrapped integer estimation using decorrelated ambiguities is given according to Teunissen (1998) as

$$\begin{array}{*{20}c} {P_{{s,{\text{B}}}} \left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} = a} \right) = \mathop \prod \limits_{i = 1}^{n} \left( {2{\Phi }\left( {\frac{1}{{2\sigma_{{\hat{a}_{i|I} }} }}} \right) - 1} \right)} \\ \end{array}$$
(28)

where \(\Phi \left(x\right)\) denotes the integral of the standard normal probability density function from minus infinity to \(x\); \({\sigma }_{{\widehat{a}}_{i|I}}\) is the conditional standard deviation of the i-th ambiguity. In the following data processing, the pre-defined failure rate \({P}_{f}\) is set to \(0.01\).

The Ambiguity Dilution of Precision (ADOP), which measures the average precision of the ambiguities, can also be used to provide an upper bound for the success rate of integer bootstrapping (Teunissen 1997, 1998; Teunissen and Odijk 1997; Odijk and Teunissen 2008)

$$\begin{array}{*{20}c} {{\text{ADOP}} = \sqrt {{\text{det}}\left( {{\varvec{Q}}_{{\varvec{a}}} } \right)}^{\frac{1}{n}} \quad {\text{[cycle]}}} \\ \end{array}$$
(29)
$$\begin{array}{*{20}c} {P_{{s,{\text{B}}}} \left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{a} = a} \right) \le \left( {2\Phi \left( {\frac{1}{{2{\text{ADOP}}}}} \right) - 1} \right)^{n} } \\ \end{array}$$
(30)

where \({\text{det}}\left({{\varvec{Q}}}_{{\varvec{a}}}\right)\) is the determinant of the \(n\)-dimensional float ambiguity VC matrix; its value depends on the variances and covariances of the ambiguities.
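Both bounds (28) and (30) are straightforward to evaluate (NumPy assumed; names are ours):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, i.e., Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrap_success_rate(cond_std):
    """Lower bound (28) on the ILS success rate, computed from the
    conditional standard deviations of the decorrelated ambiguities."""
    return float(np.prod([2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
                          for s in cond_std]))

def adop_bound(Q_a):
    """ADOP (29) in cycles and the success-rate upper bound (30)."""
    n = Q_a.shape[0]
    adop = np.linalg.det(Q_a) ** (1.0 / (2.0 * n))
    return adop, (2.0 * norm_cdf(1.0 / (2.0 * adop)) - 1.0) ** n
```

For a diagonal VC matrix with equal variances, the two bounds coincide; otherwise the bootstrapped rate stays below the ADOP-based bound.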

A novel stochastic model with robust Kalman filtering

The Kalman filter is an optimal estimator, which operates recursively based on the functional and stochastic models in a linear dynamic system. The functional models consist of the dynamic model

$$\begin{array}{*{20}c} {{\varvec{x}}_{n} = {{\varvec{\Phi}}}_{n|n - 1} {\varvec{x}}_{n - 1} + {\varvec{u}}_{n} ,} \\ \end{array}$$
(31)

and the measurement model

$$\begin{array}{*{20}c} {{\varvec{z}}_{n} = {\varvec{H}}_{n} {\varvec{x}}_{n} + {\varvec{v}}_{n} .} \\ \end{array}$$
(32)

where \({{\varvec{x}}}_{n}\) is the state vector at epoch \(n\); \({{\varvec{\Phi}}}_{n\mid n-1}\) denotes the state transition matrix; \({{\varvec{u}}}_{n}\) is the random error vector; \({{\varvec{z}}}_{n}\) is the vector of measurements at epoch \(n\); \({{\varvec{H}}}_{n}\) represents the design matrix to describe the correlation between the measurements and the state vector; \({{\varvec{v}}}_{n}\) is the measurement noise vector.

The corresponding stochastic models are assumed as (Wang 1999)

$$\begin{array}{*{20}c} {E\left\{ {{\varvec{u}}_{n} {\varvec{u}}_{i}^{T} } \right\} = \left\{ {\begin{array}{*{20}c} {{\varvec{Q}}_{n} , \quad i = n} \\ {\varvec{0} ,\quad i \ne n} \\ \end{array} } \right.} \\ \end{array}$$
(33)
$$\begin{array}{*{20}c} {E\left\{ {{\varvec{v}}_{n} {\varvec{v}}_{i}^{T} } \right\} = \left\{ {\begin{array}{*{20}c} {{\varvec{R}}_{n} ,\quad i = n} \\ {\varvec{0},\quad i \ne n} \\ \end{array} } \right.} \\ \end{array}$$
(34)
$$\begin{array}{*{20}c} {E\left\{ {{\varvec{v}}_{n} {\varvec{u}}_{i}^{T} } \right\} = \varvec{0}.} \\ \end{array}$$
(35)

where \(\mathrm{E}\left\{\cdot \right\}\) denotes the expectation operator; \({{\varvec{v}}}_{n}\) and \({{\varvec{u}}}_{n}\) are two mutually uncorrelated noise vectors, each assumed to follow a zero-mean Gaussian distribution; \({{\varvec{R}}}_{n}\) and \({{\varvec{Q}}}_{n}\) are the corresponding VC matrices. In this paper, the process noise VC matrix \({{\varvec{Q}}}_{n}\) is assumed to be completely known, and we are concerned only with the stochastic modeling of the measurements.

The optimality of the Kalman filter holds only when the process and measurement noises are Gaussian distributed (Simon 2006). In other words, the estimation variance, i.e., the main diagonal of the VC matrix of the updated states,

$$\begin{array}{*{20}c} {{\varvec{P}}_{n|n} = E\left\{ {{\varvec{e}}_{n} {\varvec{e}}_{n}^{T} } \right\} = E\left\{ {\left( {{\varvec{x}}_{n} - \hat{\varvec{x}}_{n|n} } \right)\left( {{\varvec{x}}_{n} - \hat{\varvec{x}}_{n|n} } \right)^{T} } \right\},} \\ \end{array}$$
(36)

can be minimized by the Kalman gain \({{\varvec{K}}}_{n}\) only when there are no outliers in the dynamic and measurement models. In practice, however, the functional models are prone to non-Gaussian outliers. Even when the estimated measurement VC matrix \({\widehat{{\varvec{R}}}}_{n}\) is applied, the adaptive Kalman filter can hardly resist such outliers and remain robust. Thus, we improve the adaptation process by applying the IGG III equivalent weight method to the integrated VC matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\) in (16).

Generally, based on least-squares (LS) Bayesian estimation and the principle of M (maximum likelihood-type) estimation, there are three robust estimators: M-LS, LS-M, and M-M. The interested reader is referred to Yang (1991) for more details. In contrast to the M-LS filter, the measurement noise VC matrix in the proposed filter is estimated directly from a Gauss–Markov model rather than modified by reduction factors of the weight elements. This inherits the characteristics of the adaptive Kalman filter studied by Wang et al. (1998). To improve the resistance to measurements or predicted states contaminated by outliers, robust M-estimation is here combined for the first time with the adaptive Kalman filter formulated with the Gauss–Markov model. Whereas the LS-M and M-M filters utilize equivalent weights of the state parameters to resist model errors, we simultaneously use the a posteriori residuals of both the measurements and the state predictions to reweight the updated measurements and predicted states, respectively, so that the adaptively estimated measurement noise VC matrix contains more information and hence enhances the filter robustness. The flowchart of the proposed filter, compared to the conventional and adaptive Kalman filters, is shown in Fig. 2.

Fig. 2
figure 2

Filter flowchart with three different measurement noise VC matrices

Experiments and results

To assess the performance of the stochastic model with robust Kalman filtering in a challenging environment, where the number of observed satellites is limited and measurements can be randomly contaminated or interrupted, a kinematic experiment was conducted in an urban area. The data set was collected on March 25, 2022, using two Trimble receivers sampling at 10 Hz with Trimble Zephyr 3 Geodetic antennas. One receiver (Trimble NetR9) was configured as a rover, with its antenna mounted on top of an experimental vehicle, while the other (Trimble Alloy) and its antenna were installed at a site approximately 1 km from the initial drive location. The experimental setup is depicted in Fig. 3.

Fig. 3
figure 3

Experimental configuration (left column): setup of the experimental vehicle (top panel) and test components for base receiver and antenna (bottom panel); skyplot of the observed GPS (right column)

In the subsequent analysis, only pseudorange and carrier-phase measurements on the GPS L1 frequency are utilized. The elevation cutoff angle is set to \({5}^{\circ }\). As depicted in Fig. 3, five satellites (G02, G04, G05, G16, and G20) have elevation angles lower than \({20}^{\circ }\), which means that during the experiment their signals are more likely to be disrupted or contaminated by building blockages or reflections. Nevertheless, constantly adjusting the cutoff angle to exclude low-elevation satellites is not the optimal strategy. Even though multipath errors can be avoided by excluding these satellites, such a rejection is not recommended since only a few GPS single-frequency observations are available in urban environments, and a reduction in the number of observations would result in high values of the Position Dilution of Precision (PDOP) and the Ambiguity Dilution of Precision (ADOP). Therefore, to detect and identify outliers, we employ the IGG III method based on the integrated standardized residuals of the measurements and the predicted states.

To demonstrate the benefit of this approach, three computation schemes are tested and compared against each other. Since the F-ratio and W-ratio tests determine the acceptance of integer ambiguity estimates differently, the estimated positions and velocities can also differ. With either validation test applied to each processing scheme, the three schemes effectively increase to six, as described in Table 1.

Table 1 Overview of six schemes

Test case 1—GNSS simulator

Since the intention of this work is not only to demonstrate the reliability and robustness of RTK positioning but also the accuracy of the estimated positions, an error-free reference trajectory is needed for the validation of the results. Using the feature of the Orolia Skydel GNSS simulator to log RINEX data without connecting a GNSS receiver, it was possible to simulate error-free code- and carrier-phase measurements on the GPS L1 frequency with an update rate of 1 Hz based on the user-defined trajectory shown in Fig. 4.

Fig. 4
figure 4

Simulation test: top view of the trajectory with marked locations of artificial outliers

A constellation with five observed GPS satellites (G18, G25, G26, G29, and G31) yields an almost constant PDOP of \(1.6\), which corresponds to the PDOP value of the actual drive test described in the next section. Within the simulation, G29 has the highest elevation angle, making it the master satellite for the RTK analysis. Since a minimum window width of 110 epochs is chosen for the actual dataset at 10 Hz in the next section, an estimation window width of 11 epochs is selected for the simulated dataset at 1 Hz.

To account for the noise characteristics of the GNSS observations, zero-mean normally distributed noise with a variance of \({\sigma }_{L}^{2}\) according to (9) is generated and added to the carrier-phase measurements; for the code measurements, the noise variance is scaled by a factor of \({100}^{2}\). Additionally, outliers are artificially introduced into the observations every 50 epochs, as depicted in Fig. 4, where red dots represent outlier locations. These outliers follow a normal distribution with the mean value shown in Table 2 and three different standard deviations of 5 cycles (about 0.95 m), 10 cycles (about 1.90 m), and 15 cycles (about 2.85 m). Thus, three severity levels of outliers can be studied. Table 2 summarizes all settings used to study the impact on the different processing schemes.

Table 2 Overview of the outlier setting
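The noise and outlier generation described above can be reproduced schematically as follows; the carrier-phase noise level `sigma_L`, the outlier mean, and the affected satellite are placeholder assumptions, not the paper's exact values:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_L = 0.003            # assumed carrier-phase noise std in meters (per (9))
n_epochs, n_sats = 500, 5

# zero-mean Gaussian noise: variance sigma_L^2 on the carrier-phase,
# scaled by 100^2 on the code (i.e., the std is 100 times larger)
phase_noise = rng.normal(0.0, sigma_L, (n_epochs, n_sats))
code_noise = rng.normal(0.0, 100 * sigma_L, (n_epochs, n_sats))

# artificial outliers every 50 epochs, normally distributed with a
# nonzero mean (cf. Table 2); severity level 1: std of 5 cycles ~ 0.95 m
outlier_mean, outlier_std = 1.0, 0.95          # hypothetical mean value
outlier_epochs = np.arange(49, n_epochs, 50)   # every 50th epoch
phase_noise[outlier_epochs, 0] += rng.normal(outlier_mean, outlier_std,
                                             outlier_epochs.size)
```

Severity levels 2 and 3 follow by changing `outlier_std` to about 1.90 m and 2.85 m, respectively.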

According to Table 3, the fixing rate of ambiguity resolution with the \({{\varvec{R}}}_{e}\)-F scheme has a constant value of 93.90% for all three severity levels of outliers, whereas the \(\widehat{{\varvec{R}}}\)-F and \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\)-F schemes reach a higher fixing rate of 96.67%. Thus, the F-ratio test performs almost identically for any level of outliers. In contrast, the W-ratio test leads to fixing rates of only 55.27% and 51.94% for severity levels 1 and 3 (cf. Table 3) when using the \({{\varvec{R}}}_{e}\)-W scheme. Moreover, one notices that the \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) scheme performs best for any severity level of outliers, which indicates that it is beneficial to consider the historical dynamic information by using the estimated measurement noise matrix \({\varvec{R}}\).

Table 3 RMS of position estimation errors and the fixing rate

Outliers impact the Root Mean Square (RMS) errors of the position estimates for the \({{\varvec{R}}}_{e}\) and \(\widehat{{\varvec{R}}}\) schemes, independent of the ratio test, by 1.0 to 3.0 m, as shown in Table 3. Application of the \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) scheme significantly improves the accuracy of the position estimates; the corresponding RMS values for all three severity levels of outliers are below 0.9 m. As depicted in Fig. 5, vertical errors (\(\Delta \mathrm{U}\)) are larger than horizontal ones, i.e., \(\Delta \mathrm{E}\) and \(\Delta \mathrm{N}\), especially when using the \({{\varvec{R}}}_{e}\) and \(\widehat{{\varvec{R}}}\) schemes for any simulated level of outlier severity. This is confirmed by the histograms of the position differences with respect to the true values and by their means and standard deviations illustrated as normal distribution curves. Hence, it can be concluded that the \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) scheme has the potential to enhance the robustness of the filter against outliers.

Fig. 5
figure 5

Positioning errors and their error probability distribution curves and histograms for severity level 1 (top panel); severity level 2 (middle panel); severity level 3 (bottom panel) based on the F-ratio test (left column) and W-ratio test (right column), respectively

Test case 2—drive test

Since the simulation test was conducted with a perfect observation scenario that was artificially degraded, an outdoor test was additionally performed in a real urban area along the trajectory depicted in Fig. 6 in order to evaluate the six processing schemes described in Table 1. This trajectory includes the actual vehicle's dynamics, as well as the discontinuous motion caused by traffic lights, and contains several sharp turns and near-straight-line segments in residential and commercial areas. In this environment, measurements are susceptible to contamination and interruption due to signal reflection and building blockage, particularly the carrier-phase measurements, which are more sensitive to environmental variations.

Fig. 6
figure 6

Realistic test: top view of the trajectory (top panel); The number of visible satellites with PDOP time series (bottom panel)

In contrast with the almost constant PDOP value in the simulated test, the PDOP value in Fig. 6 is clearly affected by variations in satellite geometry. There are even epochs for which the number of satellites with available pseudorange and carrier-phase measurements is lower than four (green circle in Fig. 6). As summarized in Table 4, carrier-phase outages can still occur even if a signal is received from a satellite at high elevation. Notably, in most epochs, G29 is chosen as the master satellite; however, from epochs 1258 to 1264, the master satellite is set to G31 due to carrier-phase outages with G29.

Table 4 Variations in the received measurements and the corresponding PDOP values at epochs from 1230 to 1267

In order to evaluate the effect of estimation window width on filter performance, multiple window widths are utilized in this scenario. It has been demonstrated that the optimal window size is highly dependent on trajectory dynamics, such as turns. As pointed out by Mohamed and Schwarz (1999), a trade-off should be considered when adjusting the window width. Since the collected dataset has a sampling frequency of 10 Hz with a maximum of 10 satellites per epoch, a minimum window width of (10) × (10 + 1) = 110 epochs is selected for the subsequent analysis.
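A moving-window, innovation-based estimate of \({\varvec{R}}\) along the lines discussed above can be sketched as follows; this is a generic IAE-style formulation (Mohamed and Schwarz 1999), not the paper's exact Gauss–Markov estimator, and the window-width rule simply restates the (10) × (10 + 1) = 110 computation:

```python
import numpy as np
from collections import deque

class InnovationWindow:
    """Moving-window estimator of the measurement noise VC matrix
    (generic IAE-style sketch; the paper's Gauss-Markov formulation
    differs in detail and guarantees positive definiteness)."""
    def __init__(self, width):
        self.buf = deque(maxlen=width)  # holds the last `width` epochs

    def update(self, v, H, P):
        # v: innovation vector, H: design matrix, P: predicted state cov
        self.buf.append(np.outer(v, v))
        C_v = np.mean(self.buf, axis=0)  # averaged innovation covariance
        # note: this difference is not guaranteed positive definite
        return C_v - H @ P @ H.T         # estimated R

# minimum window width for 10 Hz data with at most 10 satellites,
# following the rule stated in the text:
n_sats = 10
min_width = n_sats * (n_sats + 1)  # = 110 epochs
```

A short window tracks trajectory dynamics but yields noisy estimates; a long window smooths the dynamic information away, which is the trade-off discussed above.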

Table 5 provides the fixing rates of ambiguity resolution when applying the six schemes listed in Table 1. For the studied window sizes, it is evident that the W-ratio test in adaptation of \({\varvec{R}}\), with or without the IGG III method, always yields fixing rates exceeding 82% and should therefore be preferred over the F-ratio test. In addition, the differences between the \({{\varvec{R}}}_{e}\) scheme and both the \(\widehat{{\varvec{R}}}\) and \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) schemes become small when the moving window width exceeds half the total number of epochs (about 2600 epochs). In this case, a nearly identical fixing rate of 54% is obtained with the F-ratio test. Note that when the estimation window size is increased, a significant amount of dynamic information is smoothed out, which may diminish the effectiveness of the estimated measurement noise VC matrix.

Table 5 Ambiguity resolution success rate using different widths of moving window

Furthermore, in the \(\widehat{{\varvec{R}}}\) scheme, within a window width range of 110 to 140 epochs, the W-ratio test achieves ambiguity resolution success rates of over 97%, whereas the F-ratio test yields success rates of only about 40%. As for the \({{\varvec{R}}}_{e}\) scheme, both tests lead to smaller success rates, indicating that the W-ratio test is still preferable. After employing the IGG III method, which can account for outliers by reducing the weight of improperly predicted states, more integer ambiguities are resolved: the fixing rates based on the F-ratio test in adaptation of \({\varvec{R}}\) increase to 89%, and the W-ratio test rates increase slightly.

For the subsequent evaluation, an estimation window size of 110 epochs is used as the default. Figure 7 depicts the vehicle trajectories based on the six schemes described in Table 1. The enlarged regions in these figures show the sections where carrier-phase outages occur and where signals from low-elevation satellites were predominantly blocked by buildings, resulting in fewer than four usable satellites.

Fig. 7
figure 7

Trajectory for the GPS single-frequency (L1) RTK positioning test in the scheme of \({{\varvec{R}}}_{e}\) (top panel), \(\widehat{{\varvec{R}}}\) (middle panel) and \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) (bottom panel) using the F-ratio test (left column) and the W-ratio test (right column); map source: © OpenStreetMap contributors, CC-BY-SA

Since the drive test was conducted in a complex environment, it is extremely challenging to provide an accurate reference trajectory against which results can be compared. Moreover, a posteriori errors are influenced to some extent by a priori errors, which are set by the user and may vary across applications. Furthermore, the dynamics cannot be accurately predicted if the Kalman filter suffers from discontinuities. Even if the a posteriori standard deviations of the estimated states are small, ambiguity fixing may fail due to uncontrolled dynamic model biases in the filtering. Therefore, we use the ambiguity fixing rate rather than the formal error from the a posteriori VC matrix of the states to validate the effectiveness of the proposed schemes. The Up component of the local East-North-Up (ENU) system is depicted in Fig. 8. Discontinuities in the Up-component time series can be clearly assigned to the epochs listed in Table 4.
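For reference, extracting the Up component analyzed here amounts to rotating an ECEF baseline vector into the local ENU frame with the standard rotation matrix; a minimal sketch:

```python
import numpy as np

def ecef_to_enu(dx, lat, lon):
    """Rotate an ECEF baseline vector dx into the local East-North-Up
    frame at geodetic latitude/longitude (in radians); standard
    rotation used to obtain the Up component."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    R = np.array([[-so,       co,       0.0],
                  [-sl * co, -sl * so,  cl],
                  [ cl * co,  cl * so,  sl]])
    return R @ dx  # returns [E, N, U]
```

At the equator and zero longitude, the ECEF x-axis maps to Up and the z-axis to North, which is a convenient sanity check for the rotation.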

Fig. 8
figure 8

Local Up-coordinates using the F-ratio test (top panel) and the W-ratio test (bottom panel)

As depicted in Fig. 8, the estimated vertical coordinates deviate significantly, especially for the \({{\varvec{R}}}_{e}\) scheme. The primary cause is the discontinuity in the Kalman filtering due to carrier-phase outages and signal blockages by buildings. Since the estimated measurement noise VC matrix uses the moving window to collect historical measurement residuals, this discontinuity changes the Up component by no more than 1 m. In this situation, the W-ratio test performs more reliably than the F-ratio test: with or without the IGG III method in adapting the \({\varvec{R}}\) matrix, ambiguity success rates using the W-ratio test exceed 98%, while the Up coordinates fluctuate only slightly under these extreme observation conditions. The F-ratio test, however, does not show such robustness without the IGG III method. According to Table 5, the \(\widehat{{\varvec{R}}}\) scheme has an approximately 40% fixing rate based on the F-ratio test, which is even lower than the ambiguity resolution success rate of the \({{\varvec{R}}}_{e}\) scheme.

In addition, as depicted in Fig. 7, the discontinuity periods cause float solutions for short periods. The most likely explanation is the persistence of contaminated measurements and model errors. However, there is no improvement when the IGG III method is used to check only the standardized residuals of the measurements. This suggests that model errors contaminate the state parameters, rendering the resolved ambiguities in subsequent epochs incapable of passing the validation test. Thus, the IGG III method is also used to control the standardized residuals of the predicted position and velocity, so that more dynamic information can be transferred to the estimation of \({\varvec{R}}\) through the adjusted integrated VC matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\). Based on the F-ratio test, the fixing rate of the ambiguity resolution in the \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) scheme is close to 90%.

According to Fig. 9, the estimated standard deviations for all satellite pairs fluctuate rapidly; one reason is the high-frequency dynamic information provided by the estimation window. When using the \({{\varvec{R}}}_{e}\) scheme, significantly fewer fluctuations occur. The efficiency of robust filtering may suffer if the filtering process is degraded by improperly predicted states. To decrease the probability of model errors, the integrated VC matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\) is adjusted and transfers more information to the estimated \({\varvec{R}}\). Thus, more dispersed scatter points occur for the IGG III scheme.

Fig. 9
figure 9

Standard deviations (STD) of the DD pseudorange measurements for each satellite pair in the GPS single-frequency data using the W-ratio test

At epoch 174, the F- and W-ratio tests provide identical performance for all three processing schemes. As illustrated in Fig. 10, the estimated covariances, i.e., the off-diagonal elements of the covariance matrix, can be negative, whereas the \({{\varvec{R}}}_{e}\) covariances are always positive due to the elevation-dependent function and the error propagation law. Such negatively estimated covariances are, however, more realistic for DD positioning. In addition, after the adaptive estimation of \({\varvec{R}}\) with the modified matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\), the correlation coefficients between satellite pairs are also updated, making them distinct from those in the \(\widehat{{\varvec{R}}}\) scheme. In the outlier detection procedure, the DD measurements from the two satellite pairs G29-G18 and G29-G25 are identified as outliers since their respective standardized residuals exceed the threshold \({c}_{1}\). In accordance with the zero-weight segment of the IGG III method, their DD measurements should be disregarded. However, to preserve the intrinsic correlation of the measurements, outlier exclusion is inappropriate for the inverse transformation between the VC matrix and the weight matrix, particularly when predicted states are involved. The reduction factor for the weight elements is therefore chosen as \({10}^{-5}\) rather than 0, leaving the number of satellite pairs unchanged.
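The floored equivalent-weight adjustment described above can be sketched as follows, using the standard bifactor reweighting of a VC matrix; the matrix values are illustrative only:

```python
import numpy as np

def reweight_vc(C, factors, floor=1e-5):
    """Apply IGG III reduction factors to a VC matrix C via equivalent
    weights, flooring zero factors at 1e-5 so the weight matrix stays
    invertible and all satellite pairs are retained (sketch of the
    idea; the paper's exact construction may differ)."""
    f = np.maximum(np.asarray(factors, float), floor)
    P = np.linalg.inv(C)               # weight matrix
    scale = np.sqrt(np.outer(f, f))    # bifactor: P_eq[i,j] = P[i,j]*sqrt(f_i*f_j)
    P_eq = P * scale
    return np.linalg.inv(P_eq)         # adjusted (inflated) VC matrix

# example: second DD measurement flagged as an outlier (factor 0 -> 1e-5)
C = np.array([[2.0, 0.5], [0.5, 1.0]])
C_adj = reweight_vc(C, [1.0, 0.0])
```

The flagged measurement's variance is inflated by roughly the inverse of the floor, so it contributes almost nothing to the update while the dimension and correlation structure of the VC matrix are preserved.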

Fig. 10
figure 10

Measurement noise VC matrix for DD GPS pseudorange measurements at epoch 176 in the scheme of \({{\varvec{R}}}_{e}\) (left column), \(\widehat{{\varvec{R}}}\) (middle column) and \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\) (right column)

The ADOP value is related to the average precision of the estimated ambiguities. Figure 11 depicts the ADOP values for the estimated measurement VC matrix, which are significantly lower than those of the \({{\varvec{R}}}_{e}\) scheme. Utilizing the IGG III method enables a further reduction in the ADOP values, particularly for the F-ratio test. In adapting the \({\varvec{R}}\) matrix, the filtering process with the IGG III method based on the F-ratio test generates more reliable ambiguity estimates, thereby increasing the ambiguity success rate by 49%, whereas the W-ratio test, with a fixing rate larger than 98%, shows almost no distinction in ADOP values regardless of whether the IGG III method is used.
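The ADOP metric shown in Fig. 11 is commonly defined as the determinant of the float-ambiguity VC matrix raised to the power \(1/(2n)\); a minimal sketch under that standard definition:

```python
import numpy as np

def adop(Q_a):
    """Ambiguity Dilution of Precision: ADOP = det(Q_a)^(1/(2n)),
    with Q_a the n x n VC matrix of the float ambiguities (cycles^2).
    It equals the geometric mean of the ambiguity standard deviations
    for a diagonal Q_a."""
    n = Q_a.shape[0]
    return np.linalg.det(Q_a) ** (1.0 / (2 * n))

# example: four uncorrelated ambiguities, each with std 0.1 cycles
Q = np.diag([0.01, 0.01, 0.01, 0.01])
# adop(Q) -> 0.1 cycles
```

A lower ADOP indicates more precise float ambiguities on average, which is why the estimated and IGG III-adjusted \({\varvec{R}}\) schemes translate into higher fixing rates.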

Fig. 11
figure 11

ADOP time series using the F-ratio test (top panel) and the W-ratio test (bottom panel)

Discussion

The adaptation of the measurement noise VC matrix \({\varvec{R}}\) takes into account the averaging of integrated innovation sequences based on the Gauss–Markov model. Although the minimum size of the moving estimation window is constrained by the number of update measurements, the optimal choice of window size still depends on the trajectory dynamics. The conducted tests include multiple turns in residential and commercial areas, in addition to discontinuous motion resulting from traffic conditions, such as stops at traffic lights.

Simulations with artificially introduced outliers reveal that the \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\)-F and \({\widehat{{\varvec{R}}}}_{\mathrm{IGGIII}}\)-W schemes outperform all other schemes in terms of positioning accuracy as well as ambiguity resolution rate. In addition, tests with real data show that these two processing schemes also have higher fixing rates. Moreover, the \(\widehat{{\varvec{R}}}\) scheme with an appropriate window width of 110 epochs based on the W-ratio test leads to more integer ambiguity resolutions, with a success rate of 98.88%; this is approximately 20% higher than the \({{\varvec{R}}}_{e}\) scheme, which has a success rate of 77.86%. Processing schemes based on the F-ratio test lack robustness without the IGG III method, whereas with the IGG III method the success rate of ambiguity resolution increases to 89.30%.

In residential and commercial areas, satellite signals are likely to be reflected or blocked by buildings, or interrupted by carrier-phase outages, which can restrict the number of observed satellites. In this case, if the standard IGG III method is applied, once the standardized residual of a contaminated measurement exceeds the threshold \({c}_{1}\), the corresponding weight reduction factor is set to zero, resulting in deteriorated satellite geometry. Thus, an improved IGG III method in conjunction with the adaptation of \({\varvec{R}}\) is employed to enhance the reliability and robustness of the Kalman filter. Additionally, outliers of the predicted position and velocity should also be controlled to avoid potential model errors. After adjusting the integrated VC matrix \({{\varvec{C}}}_{{{\varvec{l}}}_{n}}\) with the standardized residuals of the measurements and the predicted states, the estimated measurement noise VC matrix contains information about the filter predictions as well as the historical dynamics. Consequently, more reliable fixed and float ambiguities can be generated in the Kalman filter iteration.

Conclusions

To improve the positioning accuracy and the success rate of the ambiguity resolution, the adaptively estimated measurement noise VC matrix \({\varvec{R}}\) is applied in the proposed stochastic model with robust Kalman filtering. For perfect GPS signals without any errors, the estimated \({\varvec{R}}\) matrix does not differ from the elevation-dependent one. In kinematic applications, however, when the environment is complex and the satellite geometry and trajectory dynamics vary rapidly, the elevation-dependent \({\varvec{R}}\) matrix might not always be suitable, and hence the estimated \({\varvec{R}}\) matrix is preferred. Meanwhile, the optimal estimation window width, which depends on the dynamics and the number of measurements, must be selected carefully and is subject to further studies.

Although the estimated \({\varvec{R}}\) matrix allows rapid variations in the environment to be better accounted for, model errors remain in the state prediction, in particular during epochs with a limited sky view or carrier-phase outages. For such situations, the IGG III method leads to higher robustness, better accuracy, and higher ambiguity fixing rates, as shown.

It should be noted, however, that the outliers simulated throughout this paper are based on normal distributions. If non-normally distributed errors remain in the measurements, the proposed method might be less robust than shown in the test cases here. Moreover, with non-optimal thresholds \({c}_{0}\) and \({c}_{1}\), outlier-free measurements may be incorrectly identified as outliers, or correctly predicted states may be flagged as wrong; in such cases, the performance of the robust Kalman filter does not improve or is even degraded. Depending on the quality of the measurements, the trajectory dynamics, the multipath environment, the frequency of cycle slips, and other influencing factors, the thresholds \({c}_{0}\) and \({c}_{1}\) might differ from the ones proposed here. Thus, further studies are required to provide these parameters for particular scenarios in urban environments. Moreover, the use of inertial sensors and their tight integration into the RTK processing scheme would be highly beneficial during epochs of carrier-phase outages. Additionally, for safety-critical applications in real time, one needs to derive a functional error bound that relates to the true value and thus provides integrity information. The ambiguity resolution rate also requires further investigation with regard to integrity.