1 Introduction

In PPP-RTK, one employs state-space representation for positioning corrections so as to reduce their transmission rate, i.e. the frequency with which the corrections are to be provided to single-receiver GNSS users (Wubbena et al 2005; Laurichesse and Mercier 2007; Collins et al 2010; Teunissen et al 2010). However, a reduction in the transmission rate comes at the cost of delivering time-delayed corrections. The user is therefore required to time-predict the corrections so as to bridge the gap between the corrections’ generation time and the user positioning time. Consequently, next to the intrinsic uncertainty brought by the randomness of GNSS measurements, ‘multi-epoch’ PPP-RTK corrections also inherit extra uncertainty that is associated with their time-prediction (Wang et al 2017).

As the user’s Kalman filter relies on the provision of such random positioning corrections, his corrected observation equations become correlated in time. This violates the Kalman filter’s key assumption, namely, that the input measurements must be time-uncorrelated. As a consequence, the user’s Kalman filter loses its minimum-variance optimality property.

In this contribution we aim to identify the main factor that makes the stochastic model of the PPP-RTK user-filter misspecified, and thereby address how the user can limit the precision-loss associated with his parameter solutions. After developing tools for measuring the stated precision-loss under existing formulations of the user’s Kalman filter, we develop alternative multi-epoch formulations that can recursively deliver close-to-minimum-variance filtered solutions of the user parameters. To bound the corresponding precision-loss experienced by the filtered solutions of such formulations, certain conditions must hold. These conditions are discussed, and their impact on the user ambiguity-resolved positioning performance is illustrated by supporting numerical results.

2 User Model Aided by External Corrections

Consider the (linearized) system of observation equations of a single-receiver PPP-RTK user

$$\displaystyle \begin{aligned} {} \underline{u} = B\,b + C\,c+\underline{n}\,, \end{aligned} $$
(1)

where the user observation vector \( \underline {u}\), together with the zero-mean random noise \( \underline {n}\), is linked to the user’s unknown parameter vector b and the unknown correction vector c through the full-rank design matrices B and C. The augmented design matrix \([B,C]\) is rank-deficient, however, meaning that the system is not solvable for both b and c. The observation vector \( \underline {u}\) may contain GNSS carrier-phase and pseudorange (code) measurements, with b containing the position coordinates, carrier-phase ambiguities, receiver clock parameters, and instrumental biases. On the other hand, the correction vector c may contain estimable forms of satellite orbit and clock parameters, atmospheric parameters, and phase/code biases (Leick et al 2015; Odijk et al 2015; Teunissen and Montenbruck 2017). The underscore symbol indicates the ‘randomness’ of quantities.

Due to the rank-deficiency of \([B,C]\) in (1), the user cannot unbiasedly determine the unknown parameters b with the sole use of his measurements. To obtain b unbiasedly, the user has to take recourse to an external provider, e.g., a network of permanent GNSS stations (Wubbena et al 2005), to receive an unbiased solution of the correction vector c. Let \( \underline {\hat {c}}\) denote such an external correction solution. With the provision of \( \underline {\hat {c}}\), the user can extend his model (1) to

$$\displaystyle \begin{aligned} {} \left[\begin{array}{c}\underline{u}\\ \underline{\hat{c}} \end{array}\right] = \left[\begin{array}{cc}B & C\\ 0 & I \end{array}\right]\,\left[\begin{array}{c}b\\ c \end{array}\right]+\left[\begin{array}{c}\underline{n}\\ \underline{\epsilon} \end{array}\right]\,, \end{aligned} $$
(2)

with \( \underline {\epsilon }\) being the zero-mean random noise vector that characterises the ‘randomness’ of the correction solution \( \underline {\hat {c}}\). Since the user design matrix B is of full-column rank, and the correction vector c can now be determined through \( \underline {\hat {c}}\), the system (2) is solvable. As far as the estimation of the user parameters b is concerned, the system of equations (2) can be reduced for c. Such a reduced model is formed by pre-multiplying (2) with the matrix \([I,-C]\). This gives

$$\displaystyle \begin{aligned} {} \underline{u} - C\,\underline{\hat{c}} = B\,b +\tilde{\underline{n}}\,,\quad \text{with}\quad \tilde{\underline{n}}:=\underline{n}- C\,\underline{\epsilon} \end{aligned} $$
(3)

The reduced model (3), with the user corrected observation vector \( \underline {u} -C \underline {\hat {c}}\), forms the basis of existing PPP-RTK models (Wubbena et al 2005; Laurichesse and Mercier 2007; Collins et al 2010; Teunissen et al 2010). In contrast to the model (2), where both b and c are jointly estimated, (3) does not directly allow a further update of the correction solution \( \underline {\hat {c}}\). From the perspective of a single-receiver user who is merely interested in his parameters b, the reduced model (3) is more appealing in the sense that it involves fewer unknowns. In fact, the reduced model (3) can be shown to deliver user parameter solutions that are identical to those of (2) if the (co)variance propagation law is properly applied to the corrected observation vector \( \underline {u} -C \underline {\hat {c}}\) (Teunissen 2000). This means that all the information required for the estimation of b is preserved when the user weights the corrected observation vector \( \underline {u} -C \underline {\hat {c}}\) in accordance with the inverse of the variance matrix of the noise vector \(\tilde { \underline {n}}= \underline {n}- C\, \underline {\epsilon }\). In practice, however, the variance matrix of the correction solution \( \underline {\hat {c}}\), i.e. the dispersion of \( \underline {\epsilon }\) in \(\tilde { \underline {n}}\), may only be partially known to the user. As a consequence, the user takes recourse to the known part of this variance matrix to weight his corrected observation vector \( \underline {u} -C \underline {\hat {c}}\), missing part of the required information, and thereby experiencing a precision-loss in the estimation of b. The following theorem provides a general means for measuring such precision-loss.
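To make the stated equivalence concrete, the following minimal numerical sketch (Python; the matrices are randomly generated and the dimensions and variance levels are assumptions for illustration only) verifies that weighting the corrected observations with the full variance matrix of \(\tilde{\underline{n}}\) reproduces the variance of the b-solution obtained from the augmented model (2).

```python
# Minimal sketch (assumed illustrative matrices): the reduced model (3), weighted
# with the full variance of n_tilde = n - C*eps, gives the same b-variance as the
# augmented model (2) with block-diagonal variance of [n; eps].
import numpy as np

rng = np.random.default_rng(1)
m, nb, nc = 12, 4, 3                      # observations, dim(b), dim(c)
B = rng.normal(size=(m, nb))              # user design matrix (full-column rank)
C = rng.normal(size=(m, nc))              # correction design matrix
Qn   = 0.01 * np.eye(m)                   # variance of the user noise n
Qeps = 0.05 * np.eye(nc)                  # variance of the correction noise eps

# Augmented model (2): observations [u; c_hat], design [[B, C], [0, I]]
A2 = np.block([[B, C], [np.zeros((nc, nb)), np.eye(nc)]])
Q2 = np.block([[Qn, np.zeros((m, nc))], [np.zeros((nc, m)), Qeps]])
Qb_full = np.linalg.inv(A2.T @ np.linalg.inv(Q2) @ A2)[:nb, :nb]

# Reduced model (3): corrected observations with variance Qn + C*Qeps*C^T
Qt = Qn + C @ Qeps @ C.T
Qb_reduced = np.linalg.inv(B.T @ np.linalg.inv(Qt) @ B)

print(np.allclose(Qb_full, Qb_reduced))   # True: no information on b is lost
```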

Theorem (\(\lambda \)-Suboptimality)

Let the zero-mean random vector \( \underline {p}\), with the full-column rank matrix L, perturb the system of observation equations

$$\displaystyle \begin{aligned} {} \underline{y} = A\,x +\underline{e}+L\,\underline{p}, \end{aligned} $$
(4)

in which the observation vector \( \underline {y}\), with its zero-mean residual vector \( \underline {e}\), is linked to the unknown parameter vector x by the full-column rank design matrix A. Also, let the variance matrix of \( \underline {e}\) be given by the positive-definite matrix \(Q_e\). In the absence of the variance matrix of \( \underline {p}\), say \(Q_{p}\), the least-squares estimator

$$\displaystyle \begin{aligned} {} \underline{\hat{x}} = A^{+}\,\underline{y},\quad \text{with}\quad A^{+} := (A^TQ_e^{-1}A)^{-1}A^TQ_e^{-1}, \end{aligned} $$
(5)

is not minimum-variance, and therefore, suboptimal. Its precision-loss, in estimating every function \(\theta =f^Tx\), can be measured by the following variance-ratio bounds

$$\displaystyle \begin{aligned}{} 1+\lambda_{\mathrm{min}}(M_{{L_A}}M_{{L\!_{A^\bot}}})\leq & \dfrac{\text{Var}(f^T\underline{\hat{x}})}{\text{Var}(f^T\underline{\hat{x}}^{*})}\\ \leq & 1+\lambda_{\mathrm{max}}(M_{{L_A}}M_{{L\!_{A^\bot}}}) \end{aligned} $$
(6)

with \( \underline {\hat {x}}^{*}\) denoting the optimal (minimum-variance) least-squares estimator. Matrices \(M_{{L_A}}\) and \(M_{{L\!_{A^\bot }}}\) are given by \(M_{{L_A}} = Q_{p} L_A^T(Q_{e}+LQ_{p}L^T)^{-1}L_A\) and \(M_{{L_{A^\bot }}} = Q_{p}L_{A^\bot }^T(Q_{e}+L\!_{A^\bot }Q_{p}L\!_{A^\bot }^T)^{-1}L\!_{A^\bot }\), where \(L_{A}= AA^+L\) and \(L\!_{A^\bot } = L-L_{A}\). The symbols \(\lambda _{\mathrm {min}}(\cdot )\) and \(\lambda _{\mathrm {max}}(\cdot )\) denote the minimum and maximum eigenvalues of a matrix, respectively. \(\boxtimes \)

Proof

The proof is given in the Appendix.\(\hfill \Box \)

To better appreciate the bounds in (6), compare the suboptimal least-squares estimator (5) with its minimum-variance counterpart (Koch 1999; Teunissen 2000)

$$\displaystyle \begin{aligned}{} \underline{\hat{x}}^*{=} (A^TQ_y^{-1}A)^{-1}A^TQ_y^{-1}\underline{y},\quad \text{with}\quad Q_y {=} Q_{e}+L Q_{p}L^T \end{aligned} $$
(7)

The theorem states that if, instead of the full variance matrix \(Q_y\), the weighting of the observation vector \( \underline {y}\) in (4) is conducted based on the known part \(Q_e\), the increase in the variance of the solutions \(f^T \underline {\hat {x}}\) relative to that of their optimal counterparts \(f^T \underline {\hat {x}}^{*}\) can always be bounded by (6). Ideally, we wish to have the bounds \((1+\lambda _{\mathrm {min}})\) and \((1+\lambda _{\mathrm {max}})\) close to unity. Their deviation from unity is due to the presence of the nonnegative eigenvalues \(\lambda _{\mathrm {min}}\) and \(\lambda _{\mathrm {max}}\), which indicate the smallest and largest precision-loss experienced by the suboptimal estimator (5), respectively.

Such precision-loss is driven by the product of the two matrices \(M_{{L_A}}\) and \(M_{{L\!_{A^\bot }}}\), each being a function of one of the orthogonal projections \(L_{A}\) and \(L\!_{A^\bot }\) of the matrix L, respectively. Here, the orthogonality is defined with respect to the inner-product metric \(Q_e^{-1}\). Thus \(L_{A}^TQ_e^{-1}L\!_{A^\bot }=0\) and \(L = L_{A}+L\!_{A^\bot }\). This implies, for nonzero matrices L, that the two matrices \(M_{{L_A}}\) and \(M_{{L\!_{A^\bot }}}\) cannot simultaneously be made zero. In fact, these two matrices ‘compete’ to limit the precision-loss experienced by the estimator \( \underline {\hat {x}}\). To see this, let us consider two extreme competing cases: (1) when L completely lies in the column-space of the design matrix A (i.e. when \(L\!_{A^\bot } = 0\)), and (2) when L is orthogonal to the column-space of A (i.e. when \(L_{A} = 0\)). The first case occurs when L can be expressed as \(L=AP\) for some matrix P. In this case, the random vector \( \underline {p}\) is completely absorbed by the parameter vector x, thus simplifying the model (4) to \( \underline {y} = A\,(x +P \underline {p})+ \underline {e}\). As a result, the model cannot distinguish between x and \( \underline {x}=x +P \underline {p}\), meaning that the uncertainty due to \( \underline {p}\) cannot be mitigated by any weighted least-squares adjustment. Both the optimal and suboptimal estimators \( \underline {\hat {x}}^*\) and \( \underline {\hat {x}}\) would therefore experience the same amount of uncertainty. This is also corroborated by the bounds in (6), as the eigenvalues \(\lambda _{\mathrm {min}}\) and \(\lambda _{\mathrm {max}}\) become zero through the equality \(L\!_{A^\bot } = 0\) (or \(M_{{L\!_{A^\bot }}}=0\)).

The second case is when \(A^TQ_e^{-1}L=0\). For this case, both the optimal and suboptimal estimators \( \underline {\hat {x}}^*\) and \( \underline {\hat {x}}\) are uncorrelated with the random vector \( \underline {p}\), i.e. \(\mathsf {Cov}( \underline {\hat {x}}^*, \underline {p}) =\mathsf {Cov}( \underline {\hat {x}}, \underline {p})=0\). This follows by applying the covariance propagation law, respectively, between (7) and \( \underline {p}\), and between (5) and \( \underline {p}\), together with the equalities \(\mathsf {Cov}( \underline {y}, \underline {p})=LQ_p\), \(A^TQ_e^{-1}L=0\) and \(Q_{y}^{-1}=Q_e^{-1}-Q_e^{-1}L(Q_p^{-1}+L^TQ_e^{-1}L)^{-1}L^TQ_e^{-1}\). Thus, both the estimators \( \underline {\hat {x}}^*\) and \( \underline {\hat {x}}\) remain intact irrespective of the uncertainty-level of \( \underline {p}\). The bounds in (6) also support this as the eigenvalues \(\lambda _{\mathrm {min}}\) and \(\lambda _{\mathrm {max}}\) become zero through the equality \(L\!_{A} = 0\) (or \(M_{{L\!_{A}}}=0\)). Apart from the two extreme cases discussed above, the maximum eigenvalue \(\lambda _{\mathrm {max}}\) is different from zero, leading the estimator (5) to lose its minimum-variance property.
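The bounds (6) can also be checked numerically. The following sketch (Python; A, L and the variance levels are randomly generated assumptions, not the configuration of Sect. 3) computes the true variance of the suboptimal estimator (5) by propagating the full variance \(Q_y\), compares it with the minimum-variance estimator (7), and confirms that the variance ratio of an arbitrary linear function \(f^Tx\) lies within the stated eigenvalue bounds.

```python
# Minimal numerical check of the lambda-suboptimality bounds (6); all inputs are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
m, nx, npp = 10, 3, 2                        # observations, dim(x), dim(p)
A  = rng.normal(size=(m, nx))
L  = rng.normal(size=(m, npp))
Qe = 0.02 * np.eye(m)                        # known noise variance
Qp = 0.10 * np.eye(npp)                      # variance of p, unknown to the user
Qy = Qe + L @ Qp @ L.T                       # full observation variance, cf. (7)

Qe_inv, Qy_inv = np.linalg.inv(Qe), np.linalg.inv(Qy)
Aplus  = np.linalg.inv(A.T @ Qe_inv @ A) @ A.T @ Qe_inv   # suboptimal estimator (5)
Qx_sub = Aplus @ Qy @ Aplus.T                # its true variance (propagated with Qy)
Qx_opt = np.linalg.inv(A.T @ Qy_inv @ A)     # minimum-variance estimator (7)

LA  = A @ Aplus @ L                          # projection of L onto range(A)
LAp = L - LA                                 # complement w.r.t. the metric Qe^{-1}
M_A  = Qp @ LA.T  @ Qy_inv @ LA
M_Ap = Qp @ LAp.T @ np.linalg.inv(Qe + LAp @ Qp @ LAp.T) @ LAp
eig  = np.linalg.eigvals(M_A @ M_Ap).real    # real, nonnegative eigenvalues

f = rng.normal(size=nx)                      # an arbitrary function f^T x
ratio = (f @ Qx_sub @ f) / (f @ Qx_opt @ f)
print(1 + eig.min() <= ratio <= 1 + eig.max())   # bounds (6) hold
```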

The result (6) can be used to quantify the suboptimality level of PPP-RTK user parameter solutions when the correctional uncertainty, i.e. the variance matrix of \( \underline {\epsilon }\) in the reduced model (3), is unknown to the user. To set the stage for measuring the largest possible precision-loss that the user estimator can experience, one needs to make the following settings \( \underline {y}\mapsto ( \underline {u}-C \underline {\hat {c}})\), \(A\mapsto B\), \( \underline {e}\mapsto \underline {n}\), \( \underline {p}\mapsto \underline {\epsilon }\), and \(L\mapsto -C\). In the next section we employ the result (6) to assess the precision-performance of ‘multi-epoch’ formulations that are used to determine the user parameter vector b in a recursive manner.

3 Multi-epoch Formulations of the User Model

In the context of PPP-RTK, the user parameter solutions are to be computed in a near real-time manner, requiring the application of least-squares estimation in its ‘recursive’ Kalman filter forms (Kalman 1960; Simon 2006; Teunissen 2001). Accordingly, the user parameter vector b may be partitioned into a time-series of parameter vectors \(b_j\) (\(j=i,i+1,\ldots \)), where the subscripts i and j indicate the time-instance (epoch). Likewise, the time-uncorrelated observation vectors \( \underline {u}_j\) (\(j=i,i+1,\ldots \)) replace \( \underline {u}\). This gives the ‘multi-epoch’ version of the user observation equations (1) as follows

$$\displaystyle \begin{aligned} {} \underline{u}_j = B_j\,b_j + C_j\,c_j+\underline{n}_j\,,\quad j=i,i+1,\ldots \end{aligned} $$
(8)

Given the system of equations (8), the user needs to receive solutions of the correction vectors \(c_j\) from an external provider at every epoch j. In practice, however, the provider disseminates state-space correction solutions at \(\tau \)-second intervals to minimize the amount of information required to be transmitted to the user (Wubbena et al 2005). The longer the sampling period \(\tau \), the less the bandwidth required for data-transmission. While each individual correction type (e.g. satellite orbits versus clocks) can have its own sampling period \(\tau \), such a distinction is not made here for the sake of presentation. Instead, only one common sampling period \(\tau \) is used for all correction types. Let \( \underline {\hat {c}}_{k\tau \mid k\tau }\) denote the solution of the correction vector \(c_{k\tau }\) that is obtained based on all the provider observations collected up to and including the epoch \(k\tau \), where k is a positive integer indicating the number of the \(\tau \)-second intervals. The user, however, needs a correction solution at epoch \(i\geq k\tau \). Such a solution can be time-predicted using the delayed solution \( \underline {\hat {c}}_{k\tau \mid k\tau }\), provided that information about the time-behavior of the corrections is known to the user. Such information can be expressed in terms of the corrections’ dynamic models (Teunissen 2001)

$$\displaystyle \begin{aligned} {} \underline{o}_{t}^c = c_{t} - \Phi^c\, c_{t-1}+\underline{w}_{t}^c,~\quad t=2,3\ldots \end{aligned} $$
(9)

where the randomness of the zero-sampled pseudo-observation \( \underline {o}^c_{t}\) is characterized by the time-uncorrelated process noises \( \underline {w}_{t}^c\). The transition matrix \(\Phi ^c\) links the correction parameters between two successive epochs. Thus, \(\Phi _{(j-i)}^c=\prod _{h=1}^{j-i}\Phi ^c\) (\(j>i\)) links the corrections from epoch i to epoch j. Accordingly, the sought-for correction solution can be time-predicted as \( \underline {\hat {c}}_{i\mid k\tau }=\Phi _{(i-k\tau )}^c\, \underline {\hat {c}}_{k\tau \mid k\tau }\).
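A minimal sketch of this time-prediction step is given below (Python; the function name and interface are hypothetical). It propagates a filtered correction solution over the latency \(i-k\tau\) with the transition matrix \(\Phi^c\), and separately accumulates the process-noise contribution, i.e. the latency-induced part of the predicted uncertainty that the Case-2 formulation of Sect. 3.3 includes in its weighting.

```python
import numpy as np

def predict_corrections(c_hat, Q_hat, Phi_c, Q_wc, latency):
    """Sketch: time-predict a filtered correction solution over `latency` epochs,
    c_hat_{i|k_tau} = Phi^c_(i-k_tau) c_hat_{k_tau|k_tau}, using the dynamic
    model (9) with o^c = 0. Q_hat is the provider's variance matrix (possibly
    unknown to the user); Q_acc collects only the accumulated process noise."""
    c_pred, Q_pred, Q_acc = c_hat.copy(), Q_hat.copy(), np.zeros_like(Q_hat)
    for _ in range(latency):
        c_pred = Phi_c @ c_pred                      # state prediction
        Q_pred = Phi_c @ Q_pred @ Phi_c.T + Q_wc     # full predicted uncertainty
        Q_acc  = Phi_c @ Q_acc  @ Phi_c.T + Q_wc     # process-noise part only
    return c_pred, Q_pred, Q_acc
```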

As with the corrections, the time-behavior of the user parameter vectors \(b_j\) can also be incorporated into the estimation process to improve the corresponding parameter solutions. They are expressed by the following dynamic models

$$\displaystyle \begin{aligned} {} \underline{o}_{j}^b = b_{j} - \Phi^b\, b_{j-1}+\underline{w}_{j}^b,~\quad j=i+1,i+2\ldots \end{aligned} $$
(10)

where the transition matrix \(\Phi ^b\) links the user parameters over time, with the zero-sampled pseudo-observations \( \underline {o}^b_{j}\) and time-uncorrelated process noises \( \underline {w}_{j}^b\) (\(j=i+1,i+2,\ldots \)).

3.1 Representation in Batch Forms

The user can feed the time-predicted correction solution \( \underline {\hat {c}}_{i\mid k\tau }\) into his measurement and dynamic models (8) and (10) so as to run his recursive Kalman-filter. As will be shown below, different formulations of the user-filter can be established, and the user ideally wishes to adopt the formulation that delivers parameter solutions with the smallest precision-loss. To measure the precision-loss under different formulations, one can employ the result of the theorem given in (6). To do so, one first needs to form the multi-epoch version of (2), and consequently, identify the corresponding reduced model (3).

Consider the epochs within a \(\tau \)-second time-interval \(j=i,\ldots ,(k+1)\tau -1\), where it is assumed that the user initial epoch i is larger than or equal to the correction transmission-time \(k\tau \), i.e. \(i\geq k\tau \). During this time-interval, the user-filter relies on the provider filtered correction \( \underline {\hat {c}}_{k\tau \mid k\tau }\). In the next time-interval, i.e. at epoch \(j=(k+1)\tau \), the user-filter can replace the out-dated correction \( \underline {\hat {c}}_{k\tau \mid k\tau }\) by its newer counterpart \( \underline {\hat {c}}_{(k+1)\tau \mid (k+1)\tau }\). With this in mind, the multi-epoch version of (2) follows by augmenting the user measurement and dynamic models (8) and (10), with the dynamic models of the corrections (9). This reads (Teunissen 2001)

(11)

On the left-hand side of (11), the user observation vectors \( \underline {u}_j\) (\(j=i,i+1,\ldots ,(k+1)\tau \)) are accompanied by the correction solutions of the two successive time-intervals \( \underline {\hat {c}}_{i\mid k\tau }\) and \( \underline {\hat {c}}_{(k+1)\tau \mid (k+1)\tau }\), together with the zero-sampled pseudo-observations \( \underline {o}^b_{j}\) and \( \underline {o}^c_{j}\). On the right-hand side of the equation, all the involved unknowns (both the user and correction parameters \(b_j\) and \(c_j\)) are linked to the measurements via the ‘batch’ structure of the design matrices \(B_j\) and \(C_j\), together with the transition matrices \(\Phi ^b\) and \(\Phi ^c\). As with any system of observation equations, the batch-form (11) is also accompanied by a zero-mean random vector \( \underline {\varepsilon }\). This vector can be expressed as a summation of four uncorrelated terms as follows

(12)

The first term \(\mathrm {I}\) contains the user-specific measurement and process noises that are time-uncorrelated. The second term \({\mathrm {I}}\!{\mathrm {I}}\) contains the accumulated process noise due to the correction latency \(i-k\tau \), i.e., the delay between the time at which the corrections are filtered by the provider and the time at which they are provided to the user. The third term \({\mathrm {I}}\!{\mathrm {I}}\!{\mathrm {I}}\) contains the correction process noises that are also time-uncorrelated. In contrast to the first three terms, however, the fourth (last) term \({{\mathrm {I}}\!{\mathrm {V}}}\) contains the correction estimation-errors \( \underline {\hat {\epsilon }}_{k\tau \mid k\tau } = \underline {\hat {c}}_{k\tau \mid k\tau }-c_{k\tau }\) and \( \underline {\hat {\epsilon }}_{(k+1)\tau \mid (k+1)\tau }= \underline {\hat {c}}_{(k+1)\tau \mid (k+1)\tau }-c_{(k+1)\tau }\), which are correlated, see e.g. Teunissen and Khodabandeh (2013). This implies that the variance matrix of \( \underline {\varepsilon }\) is not ‘block-diagonal’, preventing the recursive computation of minimum-variance parameter solutions (Teunissen 2001). This shows that the stochastic model of the PPP-RTK user-filter is always misspecified, and the filter therefore suboptimal in the minimum-variance sense, no matter which formulation is adopted. However, the user can still recursively compute suboptimal parameter solutions by approximating the stated variance matrix by a block-diagonal positive-definite matrix. Each adopted approximation leads to a different formulation of the user-filter. In the following we discuss three different formulations and assess their corresponding precision-loss in estimating the user parameters \(b_j\).

3.2 Case 1: Correctional Uncertainty Ignored

A straightforward choice of the block-diagonal matrix that can approximate the variance matrix of \( \underline {\varepsilon }\) is made by ignoring the uncertainty of the corrections. In other words, the external corrections \( \underline {\hat {c}}_{j\mid k\tau }\) (\(j=i,i+1,\ldots ,(k+1)\tau \)) are assumed precise enough to be treated as non-random, a scenario that is commonly exercised in practice (Khodabandeh 2021). According to this choice, the presence of the last three terms \({\mathrm {I}}\!{\mathrm {I}}\), \({\mathrm {I}}\!{\mathrm {I}}\!{\mathrm {I}}\) and \({{\mathrm {I}}\!{\mathrm {V}}}\) in (12) is discarded. Therefore, only the variance matrix of the first term \(\mathrm {I}\) is used to weight the underlying observation vectors. At every epoch j, the user would then work with the following measurement model

$$\displaystyle \begin{aligned} \left[\begin{array}{c}\underline{u}_{j}\\\underline{\hat{c}}_{j\mid k\tau}\end{array}\right]\approx \left[\begin{array}{cc}B_{j} & C_{j}\\ 0 & I\end{array}\right]\left[\begin{array}{c}b_{j}\\c_{j}\end{array}\right]\!+\!\left[\begin{array}{c}\underline{n}_{j}\\ 0\end{array}\right] \end{aligned} $$
(13)

The reduced form of the above system, together with the user dynamic model (10), is used to set up the underlying user-filter, that is

$$\displaystyle \begin{aligned} {} \text{Case 1}:\, \left\{\!\!\begin{array}{ll}\text{measurement-model}: &\underline{u}_{j} {-} C_j \underline{\hat{c}}_{j\mid k\tau}\approx B_j\,b_j {+} \underline{n}_{j}\\ \text{dynamic{-}model}: &\underline{o}_{j}^b = b_{j} {-} \Phi^b\, b_{j{-}1}{+}\underline{w}_{j}^b\end{array}\right. \end{aligned} $$
(14)

Since the measurement noises \( \underline {n}_{j}\) are time-uncorrelated, the user can run his Kalman-filter in its recursive form (Teunissen 2001).
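For illustration, a minimal Case-1 recursion could look as follows (Python; the function name, interface and the per-epoch data generator are hypothetical and only serve to make the recursion of (14) explicit). Note that the innovation variance uses \(Q_n\) only, i.e. the correctional uncertainty is ignored.

```python
import numpy as np

def user_filter_case1(b0, Qb0, epochs, Phi_b, Q_wb, Qn):
    """Sketch of the Case-1 recursion (14): the corrected observations
    u_j - C_j c_hat_{j|k_tau} are weighted with the user noise variance Qn only.
    `epochs` yields tuples (u_j, B_j, C_j, c_hat_j); all names are illustrative."""
    b, Qb = b0, Qb0
    for u, B, C, c_hat in epochs:
        # time update with the user dynamic model (10), o^b = 0
        b, Qb = Phi_b @ b, Phi_b @ Qb @ Phi_b.T + Q_wb
        # measurement update with the corrected observation vector of (14)
        y = u - C @ c_hat
        S = B @ Qb @ B.T + Qn                    # innovation variance: Qn only
        K = Qb @ B.T @ np.linalg.inv(S)          # Kalman gain
        b, Qb = b + K @ (y - B @ b), (np.eye(len(b)) - K @ B) @ Qb
        yield b, Qb
```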

3.3 Case 2: Correction Process Noise Ignored

The second choice for approximating the variance matrix of \( \underline {\varepsilon }\) can be made by ignoring the uncertainty of the correction process noises \( \underline {w}_j^c\) over the epochs \(j=i+1,\ldots ,(k+1)\tau -1\). According to this choice, the presence of the last two terms \({\mathrm {I}}\!{\mathrm {I}}\!{\mathrm {I}}\) and \({{\mathrm {I}}\!{\mathrm {V}}}\) in (12) is discarded. The user chooses the variance matrix of \(\mathrm {I}\!+\!{\mathrm {I}}\!{\mathrm {I}}\) to weight his observation vectors. At every epoch j, the user would then work with the following measurement model

$$\displaystyle \begin{aligned} \left[\begin{array}{c}\underline{u}_{j}\\\underline{\hat{c}}_{j\mid k\tau}\end{array}\right]\approx \left[\begin{array}{cc}B_{j} & C_{j}\\ 0 & I\end{array}\right]\left[\begin{array}{c}b_{j}\\c_{j}\end{array}\right]\!+\!\left[\begin{array}{c}\underline{n}_{j}\\\sum_{h=k\tau+1}^{j} \!\!\! \Phi_{(j-h)}^c\,\underline{w}_{h}^c\end{array}\right] \end{aligned} $$
(15)

Similar to Case 1, the reduced form of the above system, together with (10), is used to set up the underlying user-filter, that is (compare with (14))

$$\displaystyle \begin{aligned} {} \text{Case 2}:\, \left\{\!\!\begin{array}{ll}\text{measurement-model}: &\underline{u}_{j} {-} C_j \underline{\hat{c}}_{j\mid k\tau}\approx B_j\,b_j {+} \underline{n}_{j}{-}\!\!\sum\limits_{h=k\tau+1}^{j}\!\!\! C_j\,\Phi_{(j-h)}^c\,\underline{w}_{h}^c\\ \text{dynamic{-}model}: &\underline{o}_{j}^b = b_{j} {-} \Phi^b\, b_{j{-}1}{+}\underline{w}_{j}^b\end{array}\right. \end{aligned} $$
(16)

Since the uncertainty of \( \underline {w}_j^c\) is ignored, the reduced measurement noise vectors \( \underline {n}_{j}-\!\!\sum \limits _{h=k\tau +1}^{j} \!\!\! C_j\,\Phi _{(j-h)}^c\, \underline {w}_{h}^c\) can be treated as if they are time-uncorrelated, allowing the recursive computation of the user parameter solutions. As with Case 1, Case 2 also delivers suboptimal parameter solutions. In contrast to Case 1, however, Case 2 incorporates the uncertainty due to the time-prediction of the corrections \( \underline {\hat {c}}_{j\mid k\tau }\) into the measurement model.
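The only algorithmic difference with the Case-1 sketch above is the measurement-noise variance used in the update step. A minimal sketch of that variance (Python; the function name is hypothetical) is:

```python
import numpy as np

def case2_measurement_variance(Qn, C, Phi_c, Q_wc, latency):
    """Sketch of the Case-2 measurement-noise variance, cf. (16): the correction
    process noise accumulated over the latency j - k*tau is mapped onto the
    corrected observations through C, and replaces Qn in the Case-1 update."""
    Q_acc = np.zeros_like(Q_wc)
    for _ in range(latency):   # sum over h = k*tau+1,...,j of Phi^(j-h) Q_wc Phi^(j-h)^T
        Q_acc = Phi_c @ Q_acc @ Phi_c.T + Q_wc
    return Qn + C @ Q_acc @ C.T
```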

3.4 Case 3: Correction Estimation-Error Ignored

As stated previously, it is only the last term \({{\mathrm {I}}\!{\mathrm {V}}}\) in (12) that makes the user-filter misspecified. One may therefore approximate the variance matrix of \( \underline {\varepsilon }\) by neglecting the presence of \({{\mathrm {I}}\!{\mathrm {V}}}\). The rationale behind such approximation is that the provider filtered solutions \( \underline {\hat {c}}_{k\tau \mid k\tau }\) can become precise enough so as to neglect their estimation error \( \underline {\hat {\epsilon }}_{k\tau \mid k\tau }\) when the duration of the provider-filter initialization, i.e. the time-difference between the epoch \(k\tau \) and the initial epoch \(t=1\), becomes sufficiently large (e.g., \(\sim \)1 h), see (Wang et al 2017; Khodabandeh 2021; Psychas et al 2022). Upon making this approximation, the user would then work with the following measurement and dynamic models (compare with 16)

(17)

Note the difference between the formulation of Case 3 and those of the two earlier cases. In Case 3, the system is not reduced for the correction parameters \(c_j\). This is because the reduced measurement noise vectors \( \underline {n}_{j}-\!\!\sum \limits _{h=k\tau +1}^{j} \!\!\! C_j\,\Phi _{(j-h)}^c\, \underline {w}_{h}^c\) are time-correlated. In order to run the filter in its recursive form, the user therefore has to work with the augmented state-vector \([b_j^T,c_j^T]^T\) instead.
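A minimal sketch of such an augmented-state filter is given below (Python; the function name, the per-epoch data generator, and the treatment of a newly received correction solution as an error-free observation of \(c_j\), consistent with ignoring term \({{\mathrm {I}}\!{\mathrm {V}}}\), are assumptions made for illustration, not the authors' implementation).

```python
import numpy as np

def user_filter_case3(x0, Qx0, epochs, Phi_b, Phi_c, Q_wb, Q_wc, Qn):
    """Sketch of a Case-3 style filter: the state is the augmented vector
    x_j = [b_j; c_j], propagated with the dynamic models (9) and (10). A newly
    received correction c_hat is treated as an error-free observation of c_j
    (term IV ignored). `epochs` yields (u_j, B_j, C_j, c_hat_or_None)."""
    nb, nc = Q_wb.shape[0], Q_wc.shape[0]
    Z = np.zeros((nb, nc))
    Phi = np.block([[Phi_b, Z], [Z.T, Phi_c]])
    Qw  = np.block([[Q_wb,  Z], [Z.T, Q_wc ]])
    x, Qx = x0, Qx0
    for u, B, C, c_hat in epochs:
        x, Qx = Phi @ x, Phi @ Qx @ Phi.T + Qw           # time update
        if c_hat is not None:                            # new correction received
            H = np.hstack([np.zeros((nc, nb)), np.eye(nc)])
            K = Qx @ H.T @ np.linalg.inv(H @ Qx @ H.T)   # condition on c_j = c_hat
            x, Qx = x + K @ (c_hat - H @ x), (np.eye(nb + nc) - K @ H) @ Qx
        A = np.hstack([B, C])                            # measurement design [B_j, C_j]
        S = A @ Qx @ A.T + Qn
        K = Qx @ A.T @ np.linalg.inv(S)
        x, Qx = x + K @ (u - A @ x), (np.eye(nb + nc) - K @ A) @ Qx
        yield x, Qx
```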

To numerically evaluate the maximum precision-loss experienced by the user-filter under the formulations of the three cases discussed above, we employ the result (6) and compute the square-root of the upper-bound, i.e. \(\sqrt {1\!+\!\lambda _{\mathrm {max}}}\), for the case where a dual-frequency Galileo user (E1/E5a) is provided with clock, bias and ionospheric corrections every \(\tau \) seconds. The eigenvalue \(\lambda _{\mathrm {max}}\) is evaluated on the basis of the variance matrix corresponding to the multi-epoch batch model (11). The corresponding results as a function of the correction latency \(i-k\tau \) are shown in Fig. 1. As illustrated in the figure, the stated upper-bounds of all three cases are close to unity in the absence of correction latency (i.e. when \(i=k\tau \)), indicating that they would deliver parameter solutions almost as precise as those of the minimum-variance estimation. However, the suboptimality levels of Cases 1 and 2 rapidly worsen as the latency increases (the red and blue curves). The precision-loss associated with Case 3, by contrast, remains marginal, provided that the duration of the provider-filter initialization is sufficiently long (see the green curves in the right panel of the figure).

Fig. 1

The maximum increase in the standard-deviation ratio of the suboptimal-to-optimal estimation of the user parameters using network-derived corrections of a single station (thick lines) and twenty stations (dashed lines). The duration of the provider-filter initialization is set to 5 min (left) and 1 h (right). The results of Cases 1, 2 and 3 are indicated in red, blue and green, respectively

Next to the primary evaluation in Fig. 1, we also make use of a Galileo dual-frequency (E1/E5a) real-world data-set to study the positioning performance of the misspecified user-filter. The data-set was collected with a 1 Hz sampling rate on 21 January 2022 by two GNSS permanent stations: CUT0 and UWA0, both located in Western Australia. The precise orbital corrections are a priori applied to the data. To emphasize the performance of the proposed filter formulations (i.e. Cases 2 and 3) in handling time-delayed corrections, we consider correction latencies that are considerably higher than the typical 5–10 s latency of current IGS real-time PPP corrections (https://igs.org/rts/), see, e.g., Leandro et al (2011). The clock corrections are made available to the user every 10 s, ionospheric corrections every 30 s, and phase-bias corrections every 10 min. The corrections are generated via a single-station PPP-RTK setup (Khodabandeh 2021), where the duration of the provider-filter initialization is set to 1 h. Station CUT0 serves as the correction provider, whereas station UWA0, located about 8 km away from the provider, serves as the user.

In order to infer the overall performance of the user-filter under the formulations offered by Cases 1, 2 and 3, we generate 300 different realizations of the filtered positioning solutions by shifting the user-filter starting epoch i every 15 s. The time-series of the medians (i.e. 50% percentiles) of these realizations, within the area bounded by their 25% and 75% percentiles, are presented in Figs. 2 and 3 for the user ambiguity-float and ambiguity-fixed options, respectively. The medians of the positioning errors corresponding to Cases 2 and 3 are shown to be considerably smaller than those of Case 1. The results also indicate that Case 3 outperforms Case 2, as it, on average, delivers smaller medians of the positioning errors. In particular, the difference in their performance becomes considerable when the user fixes his float ambiguities. Note also the presence of periodic jumps in the medians for all three cases. This behaviour is due to the periodic nature of the correction latencies, which vary from zero to \(\tau -1\) s within each data-transmission interval. The corresponding periodic peaks become more pronounced in the solutions of the east component when the float ambiguities are wrongly fixed.

Fig. 2

Ambiguity-float results: The medians (50% percentiles) of the absolute positioning errors corresponding to 300 user-filter realizations within the area of their 25% and 75% percentiles. The horizontal axes indicate the time lapsed (in seconds) since the user-filter has started. The results of Cases 1, 2 and 3 are indicated in red, blue and green, respectively

Fig. 3

Ambiguity-fixed results: The medians (50% percentiles) of the absolute positioning errors corresponding to 300 user-filter realizations within the areas of their 25% and 75% percentiles. The horizontal axes indicate the time lapsed (in seconds) since the user-filter has started. The results of Cases 1, 2 and 3 are indicated in red, blue and green, respectively

4 Concluding Remarks

In this contribution we presented a general means for measuring the precision-loss that is experienced by the misspecified PPP-RTK user-filter. It was addressed why the stochastic model of the user-filter is always misspecified, irrespective of the multi-epoch formulation adopted, cf. term \({{\mathrm {I}}\!{\mathrm {V}}}\) in (12).

By discussing three different formulations for the user-filter, it was demonstrated that the user can potentially limit the suboptimality level of his filter, i.e. when the correction latency is not high and when the duration of the provider-filter initialization is sufficiently long. In contrast to the commonly-used multi-epoch formulation (Case 1), our proposed formulations (Cases 2 and 3) were shown to deliver user parameter solutions that are almost as precise as those of the minimum-variance estimation.