Two-step success rate criterion strategy: a model- and data-driven partial ambiguity resolution method for medium-long baseline RTK

When GNSS measurement errors such as ionospheric delays remain large, full ambiguity resolution (FAR) takes an unacceptably long time to fix ambiguities to integers. Partial ambiguity resolution (PAR), under this circumstance, is a possible way to obtain precise positioning before FAR is achieved. PAR fixes a subset of ambiguities instead of all of them to improve the fix rate, success rate or positioning accuracy according to different criteria. This contribution proposes a two-step success rate criterion (TSRC) strategy that first selects a subset of ambiguities to fix using a given success rate threshold and then enlarges the subset by maximizing the expected improvement in baseline precision from fixing it. The TSRC strategy is then compared with two other commonly used PAR strategies and the FAR strategy in an experiment with real data forming a medium-long-baseline setup (baselines longer than 15 km and shorter than 50 km). The results show that in medium-long-baseline cases, the TSRC achieves the shortest time to first fix (TTFF), which is 100–200 s shorter than the other PAR strategies and 400–800 s shorter than the FAR strategy, excluding cases in which FAR fixes no ambiguities at all. Consequently, the TSRC yields the highest positioning accuracy on average. In addition, the variance–covariance (VC)-matrix of the float ambiguities is found to have a heavy impact on the TSRC strategy in some cases, and amplifying the VC-matrix before the ambiguity fixing process can partly mitigate this effect.


Introduction
With the recent rapid development of the BDS and Galileo navigation systems, the number of satellites available for GNSS positioning has greatly increased, which has enhanced the satellite geometry and measurement redundancy. However, when large ionospheric and multipath errors exist, rapid GNSS precise positioning is hindered because full ambiguity resolution (FAR) takes a long time (Li et al. 2015a). Given the increased satellite number, partial ambiguity resolution (PAR) may be a technique to bypass this hindrance. PAR fixes a subset of ambiguities instead of all ambiguities to improve the fix rate, success rate or positioning accuracy according to different criteria. Teunissen et al. (1999) initiated the idea of PAR and proposed a success rate criterion (SRC)-based strategy, which excludes ambiguities until a preset success rate threshold is met. Mowlam (2004) and other researchers studied elevation ordering strategies, which iteratively exclude the lowest-elevation satellite until a preset success rate threshold is met. Feng (2008) and other researchers studied the wide-lane/narrow-lane (WL-NL) and extra-wide-lane/wide-lane/narrow-lane (EWL-WL-NL) strategies, which fix the EWL, WL and NL ambiguities in a cascading way. In the WL-NL and EWL-WL-NL strategies, fixing the NL ambiguities is essential to obtain a high-precision position and is also the bottleneck due to their short wavelength. Li et al. (2015b) proposed a combination of the EWL-WL-NL and SRC strategies, which first fixes the EWL and WL ambiguities and then fixes a subset of NL ambiguities chosen using the SRC. Wen et al. (2015) selected the subset that minimizes the trace of the variance-covariance (VC)-matrix of the fixed solution, subject to the float ambiguity estimates being stable over a time window. Ji et al. (2018) proposed several linear combinations of ordered FAR integer candidates that meet different positioning precisions and failure rates, leaving users to choose their preferred combination as a PAR strategy.
In most of the above strategies, only a prior stochastic model of the measurements is used in the subset selection, which is called model-driven PAR (Teunissen and Verhagen 2008). Its advantage is that one can select a subset even without knowing the real data, while its disadvantage is that a discrepancy between the stochastic model and the real measurements may affect the subset selection. To overcome this disadvantage, researchers have suggested iteratively reducing the model-driven ambiguity subset if the fixed ambiguities fail the validation process. Dai et al. (2007) suggested excluding ambiguities whose values differ between the optimal and suboptimal integer candidates after FAR is rejected by a ratio test (Euler and Schaffrin 1991). Parkins (2011) suggested ordering all possible subsets using the signal-to-noise ratio (SNR) or ambiguity dilution of precision (ADOP) (Teunissen 1997b) and testing the ordered subsets with a ratio test until one passes. Brack and Günther (2014) tested every element of the full ambiguity vector to exclude those that failed the difference test introduced in their work. Gao et al. (2017) proposed iteratively excluding ambiguities in the NL ambiguity resolution after fixing the WL ambiguities so as to meet three criteria: a success rate criterion, an elevation cutoff angle and a ratio test. Castro-Arvizu et al. (2021) proposed a precision-driven method, which iteratively validates the fixed ambiguities and reduces the subset size chosen by an SRC strategy until the trace of the VC-matrix of the fixed solution is less than a given threshold. Hou et al. (2016a) presented a model- and data-driven PAR strategy, named the two-step success rate criterion (TSRC), which extends the ambiguity subset of the SRC by maximizing the expected improvement in precision gain. The TSRC is also designed to keep the incorrect fixing rate below a given threshold (e.g., 0.001). Compared with the above model- and data-driven strategies, which include iterative ambiguity validation processes, the TSRC avoids repeated integer least squares (ILS) searches and ratio tests, and it has been demonstrated through simulation to improve the positioning precision while maintaining a low incorrect fixing rate. In this contribution, the TSRC strategy is tested with real medium-long-baseline measurements and compared with typical PAR strategies such as the SRC and WL-NL. As a control group, FAR is also included in the comparison.
First, the general PAR process is briefly described, and PAR strategies such as WL-NL, SRC and TSRC are introduced. Second, the performance of these PAR strategies and FAR using real measurements is compared and discussed. Finally, a summary of the experiments and the outlook of future work are given.

Methodology
The general double-differenced measurement equations for a visible satellite are as follows:

$$P_i = \mathbf{E}_i \cdot \mathbf{r} + T + \mu_i I + \varepsilon_{P_i}$$

$$L_i = \mathbf{E}_i \cdot \mathbf{r} + T - \mu_i I + \lambda_i N_i + \varepsilon_{L_i}$$

where $P$ is the pseudorange, $L$ is the carrier phase measurement in meters, $i$ is the frequency index, $\mathbf{E}_i$ is the line-of-sight vector from receiver to satellite, $\mathbf{r}$ is the baseline vector between base and rover stations, $T$ is the tropospheric delay, $I$ is the ionospheric delay, $\lambda_i$ is the wavelength of the carrier phase, $N_i$ is the carrier phase ambiguity, $\varepsilon_P$ and $\varepsilon_L$ denote the measurement noise of the pseudorange and carrier phase, and $\mu_i$ is the first-order factor of the ionospheric delay, which is calculated as follows:

$$\mu_i = \frac{f_1^2}{f_i^2}$$

where $f_1$ and $f_i$ are the frequencies of the L1 signal and the $i$th signal, respectively.
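For concreteness, a minimal Python sketch of this factor, evaluated for the GPS L1/L2 pair (the frequency constants are standard GPS values; the function name is illustrative, not from any GNSS library):

```python
# First-order ionospheric factor mu_i = f1^2 / fi^2, relative to L1.
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def iono_factor(fi, f1=F1):
    """First-order ionospheric delay factor of the i-th signal."""
    return (f1 / fi) ** 2

print(iono_factor(F1))  # 1.0 for L1
print(iono_factor(F2))  # about 1.6469 for L2
```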
Combining the observations of all visible satellites into one linear system, the float solution can be estimated using a Kalman filter and written as

$$\begin{bmatrix} \hat{x} \\ \hat{N} \end{bmatrix}, \qquad Q = \begin{bmatrix} Q_{\hat{x}\hat{x}} & Q_{\hat{x}\hat{N}} \\ Q_{\hat{N}\hat{x}} & Q_{\hat{N}\hat{N}} \end{bmatrix}$$

where $\hat{x}$ contains $\mathbf{r}$, $T$ and $I$ in the case of medium-long baselines, and $\hat{N}$ contains the float ambiguities of all visible satellites. The notation $\hat{\cdot}$ denotes the float solution, and $\check{\cdot}$ denotes the integer ambiguity solution and the baseline parameters updated with integer ambiguities. The VC-matrix $Q$ of the float solution is propagated from the VC-matrix of the measurements and is usually used as the nominal precision of the float solution.
The float ambiguities $\hat{N}$ can be fixed using the least-squares ambiguity decorrelation adjustment (LAMBDA) (Teunissen 1995), and $\hat{x}$ can be updated as follows:

$$\check{x}_q = \hat{x} - Q_{\hat{x}\hat{N}_q} Q_{\hat{N}_q\hat{N}_q}^{-1} \left( \hat{N}_q - \check{N}_q \right)$$

where the subscript $q$ indicates the subset of ambiguities that is actually fixed, so that $\check{x}_q$ is the baseline solution updated using the fixed subset $\check{N}_q$.
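As an illustration of this update step, a minimal NumPy sketch under the assumption that the float solution and the relevant VC-matrix blocks are already available as arrays (all names are illustrative, not from RTKLIB or any GNSS library):

```python
import numpy as np

def fix_update(x_float, Qxn, Qnn, n_float, n_fixed):
    """Conditional update of the float baseline with a fixed ambiguity
    subset: x_check = x_hat - Q_xN Q_NN^{-1} (N_hat - N_check)."""
    return x_float - Qxn @ np.linalg.solve(Qnn, n_float - n_fixed)

def fixed_covariance(Qxx, Qxn, Qnn):
    """VC-matrix of the baseline after fixing (ambiguities treated as
    known), used later for the precision gain:
    Q_xx_check = Q_xx - Q_xN Q_NN^{-1} Q_Nx."""
    return Qxx - Qxn @ np.linalg.solve(Qnn, Qxn.T)
```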

WL-NL strategy
Generally, the WL-NL strategy first fixes the wide-lane ambiguities and then fixes the narrow-lane ambiguities. This strategy has been studied by many researchers, and there are two common approaches to form the WL ambiguities. The first approach forms the WL phase measurement combination, and $N_{WL}$ is solved together with $N_1$. The observation equations are as follows:

$$L_1 = \mathbf{E} \cdot \mathbf{r} + T - \mu_1 I + \lambda_1 N_1 + \varepsilon_{L_1}$$

$$L_{WL} = \frac{f_1 L_1 - f_2 L_2}{f_1 - f_2} = \mathbf{E} \cdot \mathbf{r} + T + \frac{f_1}{f_2} I + \lambda_{WL} N_{WL} + \varepsilon_{L_{WL}}$$

with $N_{WL} = N_1 - N_2$ and $\lambda_{WL}$ calculated as follows:

$$\lambda_{WL} = \frac{c}{f_1 - f_2}$$

where $c$ denotes the speed of light and $f_1$ and $f_2$ denote the frequencies of L1 and L2, respectively.
The second approach does not form the WL phase measurement combination but substitutes $N_2 = N_1 - N_{WL}$ into the $L_2$ equation, so that $N_{WL}$ is solved together with $N_1$. The observation equations are as follows:

$$L_1 = \mathbf{E} \cdot \mathbf{r} + T - \mu_1 I + \lambda_1 N_1 + \varepsilon_{L_1}$$

$$L_2 = \mathbf{E} \cdot \mathbf{r} + T - \mu_2 I + \lambda_2 \left( N_1 - N_{WL} \right) + \varepsilon_{L_2}$$

with $\mu_1$ and $\mu_2$ calculated as above. The second approach is preferred in our study since it avoids the measurement noise amplification of the first approach (Guo et al. 2014).
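A small sketch of the second approach's reparameterization and the wide-lane wavelength (GPS L1/L2 values; the mapping matrix M is one illustrative way to express the substitution, not a prescribed implementation):

```python
import numpy as np

# Substituting N2 = N1 - N_WL means the estimator solves for (N1, N_WL)
# instead of (N1, N2); per satellite, this is the linear map below applied
# to the ambiguity columns of the design matrix.
M = np.array([[1.0,  0.0],
              [1.0, -1.0]])   # (N1, N2)^T = M @ (N1, N_WL)^T

c = 299792458.0                # speed of light, m/s
f1, f2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 frequencies, Hz
lam_wl = c / (f1 - f2)         # wide-lane wavelength, about 0.862 m
```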

Success rate criteria (SRC) strategy
The SRC strategy is based on a general geometry-based model but fixes a subset of ambiguities chosen by the success rate criterion instead of fixing all ambiguities. The original $L_1$, $L_2$, $P_1$ and $P_2$ measurements are used in a geometry-based model, and the float $N_1$, $N_2$ ambiguities and baseline parameters are obtained as described above. Then, the $N_1$ and $N_2$ ambiguities are decorrelated using the Z-transformation of LAMBDA, where the decorrelated ambiguities $\hat{z}$ are ordered such that the last one is the most precise and the first one is the least precise.
The Z-transformation is written as

$$\hat{z} = Z^T \hat{N}$$

with $Z$ an invertible integer matrix, and $Q_{\hat{z}\hat{z}}$ is calculated following the variance propagation law as follows:

$$Q_{\hat{z}\hat{z}} = Z^T Q_{\hat{N}\hat{N}} Z$$

Among the Z-decorrelated ambiguities, the SRC starts from the last (most precise) ambiguity and iteratively adds the next remaining ambiguity to the PAR subset as long as the bootstrapping success rate of the subset remains at or above a given threshold $P_0$. The iteration stops when the bootstrapping success rate drops below $P_0$ or all ambiguities are included in the subset. Denoting the subset chosen by the SRC with $n_s$ ambiguities as $z_{src}$, the bootstrapping success rate $P_s(n_s)$ is calculated as follows (Teunissen 1998):

$$P_s(n_s) = \prod_{i=n-n_s+1}^{n} \left( 2\Phi\!\left( \frac{1}{2\sigma_{\hat{z}_{i|I}}} \right) - 1 \right)$$

where $n$ is the total number of ambiguities, $n_s$ is the number of selected ambiguities, $\Phi$ is the cumulative distribution function of the standard normal distribution, and $\sigma_{\hat{z}_{i|I}}$ is the conditional standard deviation of the $i$th Z-transformed ambiguity, obtained from the $i$th diagonal entry of the $D$ matrix of the $L^T D L$ decomposition of $Q_{\hat{z}\hat{z}}$. After $n_s$ is determined, the subvector $\hat{z}_{src}$ and its VC-matrix $Q_{\hat{z}_{src}\hat{z}_{src}}$ are formed, and an ILS search and a ratio test are performed to fix $z_{src}$.
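A minimal sketch of this subset selection, assuming Qzz is the already decorrelated VC-matrix ordered so that the most precise ambiguity is last; sequential Schur-complement conditioning stands in for the L^T D L decomposition, and scipy supplies Φ (names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def src_subset(Qzz, p0=0.999):
    """Size ns of the largest trailing subset of decorrelated ambiguities
    whose bootstrapping success rate is at least p0."""
    n = Qzz.shape[0]
    S = Qzz.copy()
    p, ns = 1.0, 0
    for i in range(n - 1, -1, -1):
        sigma = np.sqrt(S[i, i])              # conditional std of z_i
        p *= 2.0 * norm.cdf(0.5 / sigma) - 1.0
        if p < p0:
            break                             # threshold violated: stop
        ns += 1
        # condition the remaining ambiguities on z_i (Schur complement)
        S[:i, :i] -= np.outer(S[:i, i], S[i, :i]) / S[i, i]
    return ns  # z_src is the last ns entries of z_hat
```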

Two-step success rate criterion (TSRC)
The two-step success rate criterion (TSRC) is derived from the SRC strategy. It chooses a subset larger than or equal to $z_{src}$, named $z_{lar}$, fixes $z_{lar}$ to integers using the ILS search, and tests the integer candidates using the fixed failure rate ratio test (FFRT) (Hou et al. 2016b; Verhagen and Teunissen 2013). If the test is not passed, it fixes $z_{src}$ using ILS and tests it with a normal ratio test. The ratio test in the TSRC is defined as follows (Verhagen and Teunissen 2012): accept the optimal integer candidate $\check{z}_o$ if

$$\frac{\left\| \hat{z} - \check{z}_o \right\|^2_{Q_{\hat{z}\hat{z}}}}{\left\| \hat{z} - \check{z}_s \right\|^2_{Q_{\hat{z}\hat{z}}}} \le \mu$$

where $\check{z}_o$ and $\check{z}_s$ denote the optimal and suboptimal integer candidates given by the ILS search, respectively, and $\mu$ is the critical value, a positive scalar that is always smaller than or equal to 1. The reason the FFRT is not applied in the second step is as follows. The threshold value of the FFRT is generated by Monte Carlo simulation under the assumption that the float ambiguities are normally distributed. In the TSRC, entering the second step means that the FFRT has rejected $z_{lar}$ in the first step. In integer aperture estimation theory (Teunissen 2003; Verhagen 2004), the rejection of $z_{lar}$ by the FFRT means that the float $z_{lar}$ does not lie within the "correct acceptance region" of the normal distribution. If the float $z_{lar}$ is precluded from a certain region of the normal distribution, the smaller subset float $z_{src}$ is also precluded from a corresponding region. The float $z_{src}$ is therefore no longer normally distributed, and the underlying assumption of the FFRT no longer holds. Consequently, we cannot use the FFRT in the second step and instead use a constant threshold value for the ratio test. Since the selection of $z_{src}$ meets the success rate criterion, it is expected to have a high success rate, so a threshold value of $\mu = 0.9$ is used in the second-step ratio test. More details about this explanation are given in Hou et al. (2016a).
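A sketch of the ratio test itself, assuming the ILS search has already returned the optimal and suboptimal candidates; mu is an FFRT table value in the first step and the constant 0.9 in the second (Qzz_inv and the candidate vectors are plain NumPy arrays; nothing here comes from a specific LAMBDA implementation):

```python
import numpy as np

def ratio_test(z_float, z_best, z_second, Qzz_inv, mu):
    """Accept z_best if the ratio of squared Q-weighted distances of the
    best to the second-best candidate does not exceed mu (0 < mu <= 1)."""
    d_best = (z_float - z_best) @ Qzz_inv @ (z_float - z_best)
    d_second = (z_float - z_second) @ Qzz_inv @ (z_float - z_second)
    return d_best / d_second <= mu
```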
The subset $z_{lar}$ is selected as follows. $z_{lar}$ is expected to improve the baseline precision as much as possible, conditioned on reliable fixing. The more ambiguities that are fixed correctly, the higher the attainable positioning precision. On the other hand, the more ambiguities to fix, the lower the fix rate will be when the VC-matrix of the float ambiguities is poor. However, neither a low fix rate with many ambiguities (high precision improvement) nor a high fix rate with few ambiguities (low precision improvement) is our aim. Instead, our aim is to achieve a good balance between the precision improvement and the fix rate.
To quantify the baseline precision improvement from fixing ambiguities, Teunissen (1997a) introduced the concept of "precision gain." Following this concept, we define the precision gain in the TSRC strategy as follows:

$$g(z) = \frac{\operatorname{tr}\!\left( Q_{\hat{x}\hat{x}} \right)}{\operatorname{tr}\!\left( Q_{\check{x}\check{x}}(\check{z}) \right)}$$

where $Q_{\hat{x}\hat{x}}$ and $Q_{\check{x}\check{x}}(\check{z})$ are the $3 \times 3$ dimensional VC-matrices of the float and fixed baseline coordinates, respectively. The expected value of the precision gain from trying to fix $z$ is

$$E\{g(z)\} = P_f(z)\, g(z) + \left( 1 - P_f(z) \right)$$

where $P_f(z)$ is the fix rate, which can be calculated by a series of empirical equations (Hou et al. 2016a).
The expected improvement in precision gain of $z_{lar}$ over $z_{src}$ is

$$E\{\Delta g\} = E\{g(z_{lar})\} - E\{g(z_{src})\}$$

and a proper ambiguity subset $z_{lar}$ should maximize this expected improvement $E\{\Delta g\}$.
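To make the selection concrete, a sketch under the assumptions that candidate subsets are trailing blocks of the decorrelated ambiguities and that the fix rate is supplied by a caller-provided function standing in for the empirical equations of Hou et al. (2016a). Qxz denotes the float baseline/ambiguity cross-covariance in the decorrelated basis; all names are illustrative:

```python
import numpy as np

def expected_gain(Qxx, Qxz_k, Qzz_k, fix_rate):
    """E{g} = P_f * g + (1 - P_f), with g = tr(Qxx) / tr(Qxx_fixed)."""
    Qxx_fixed = Qxx - Qxz_k @ np.linalg.solve(Qzz_k, Qxz_k.T)
    g = np.trace(Qxx) / np.trace(Qxx_fixed)
    return fix_rate * g + (1.0 - fix_rate)

def select_zlar(Qxx, Qxz, Qzz, ns, fix_rate_fn):
    """Extend the trailing SRC subset (size ns) to the size k that
    maximizes the expected precision gain. fix_rate_fn(k) returns P_f for
    a trailing subset of size k (assumed supplied by the caller)."""
    n = Qzz.shape[0]
    best_k, best_eg = ns, -np.inf
    for k in range(max(ns, 1), n + 1):
        idx = slice(n - k, n)  # trailing k decorrelated ambiguities
        eg = expected_gain(Qxx, Qxz[:, idx], Qzz[idx, idx], fix_rate_fn(k))
        if eg > best_eg:
            best_k, best_eg = k, eg
    return best_k
```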
The whole TSRC process is described in Fig. 1. First, a subset $z_{src}$ is chosen using the success rate criterion, and the subset $z_{lar}$ is extended from $z_{src}$ by maximizing $E\{\Delta g\}$. Then, $z_{lar}$ is fixed using the ILS search and tested using the FFRT. If the FFRT passes, $z_{lar}$ is used to update the float coordinate solution. Otherwise, $z_{src}$ is fixed using ILS and tested with the constant-threshold ratio test. If that test passes, $z_{src}$ is used to update the float coordinate solution. Otherwise, the float coordinate solution is accepted as the final solution.
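Combining the helpers sketched above, the decision flow of Fig. 1 could look as follows (ils_search and ffrt_threshold are hypothetical stand-ins for a LAMBDA ILS search and the FFRT lookup table; nothing here is a prescribed implementation):

```python
import numpy as np

def tsrc_fix(z_float, Qzz, ns, k_lar, ils_search, ffrt_threshold):
    """Two-step TSRC: try z_lar with the FFRT, fall back to z_src with a
    constant ratio-test threshold, otherwise keep the float solution."""
    for k, mu in ((k_lar, ffrt_threshold(k_lar)), (ns, 0.9)):
        z = z_float[-k:]                     # trailing subset of size k
        Q = Qzz[-k:, -k:]
        z_best, z_second = ils_search(z, Q)  # ILS candidates (assumed)
        if ratio_test(z, z_best, z_second, np.linalg.inv(Q), mu):
            return z_best                    # accepted fixed subset
    return None                              # keep the float solution
```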
Notably, as a ratio test is performed in each step of the TSRC, the failure rate of the whole ambiguity fixing process is expected to be well controlled, and simulation results have demonstrated this (Hou et al. 2016a). Thus, the TSRC is designed to maximize the expected improvement in precision gain while still meeting the failure rate requirement.
The AR strategies described above are listed in Table 1 with brief descriptions.

Table 1 AR strategies compared in this study

FAR: fix all ambiguities using LAMBDA
SRC: fix the largest subset of decorrelated ambiguities whose bootstrapping success rate meets the threshold $P_0$
TSRC: fix the subset of ambiguities reliably that maximizes the precision gain
WL-NL: 1) use $N_2 = N_1 - N_{WL}$ in the $L_2$ equation to solve $N_1$ and $N_{WL}$; 2) first fix $N_{WL}$ and then $N_1$ using LAMBDA

Discussion: Model driven or data driven
As mentioned in the introduction, a model-driven PAR strategy uses only a prior stochastic model of the measurements to select the subset, while a data-driven PAR strategy usually uses a ratio test to iteratively exclude ambiguities and validate the subset, which is supposed to take advantage of information in the real data.
According to this principle, the WL-NL is a model-driven strategy, since it determines the subset of WL ambiguities without needing the real measurements. The SRC is also a model-driven strategy, since it determines the subset from the satellite geometry and the stochastic model rather than from the measurements themselves: the success rate is computed from the VC-matrix of the float ambiguities, which is propagated from the measurement stochastic model.
The TSRC, however, selects a larger subset extended from the model-driven SRC subset. Whether the larger subset is used to improve the baseline depends on the ratio test, which is the data-driven part. Thus, the ratio test plays an important role in the TSRC. One should use neither too low nor too high a threshold for the ratio test, to avoid a high failure rate or a high false alarm rate. The FFRT can provide a proper threshold for the ratio test by considering the tolerated failure rate, the number of ambiguities and their VC-matrix. This is why the FFRT is preferred over the normal ratio test in the TSRC. The advantage of adding the data-driven part is that it may enlarge the SRC subset, as the SRC subset is often excessively small due to an improper stochastic model. From another point of view, if the stochastic model perfectly suits the measurements, the SRC would select the largest subset and fix it with a high success rate, and the TSRC would no longer be needed. This is the ultrashort-baseline case, where most errors are eliminated by double differencing.

Fig. 1 Diagram of the TSRC
A strategy that combines the SRC with the iterative ratio test is also model- and data-driven. However, the iterative run of the LAMBDA algorithm consumes too much time, which hinders its use in real-time applications.
Another advantage of the TSRC is that it uses the original observations instead of linear combinations of observations, avoiding amplification of the measurement noise. Moreover, compared with the EWL-WL-NL and WL-NL strategies, which need multifrequency measurements to form specific linear combinations, the TSRC is a general PAR strategy that applies to single-, dual- and triple-frequency cases alike.

Experiment
To compare the performance of the above AR strategies under general conditions, we form three randomly chosen medium-long baselines among the stations of the Hong Kong Geodetic Survey Services SatRef network and use all available multifrequency GPS + Galileo + BDS + GLONASS observations in the experiments.
The above four AR strategies are implemented in a modified version of the RTKLIB software (Takasu and Yasuda 2009). The data processing options are summarized in Table 2.
In the float solution estimation process, the rover is assumed to be in kinematic mode, and its coordinates change at every epoch. The unknown parameters to be estimated include the baseline coordinates, float ambiguities, slant total electron content (TEC) and zenith tropospheric delay (ZTD), as the ionospheric and tropospheric delays cannot be canceled in medium-long baseline scenarios. In the state update process of the extended Kalman filter (EKF), the baseline coordinates are reinitialized at every epoch; the float ambiguities, slant TEC and ZTD are inherited from the last epoch with random walk noise added; and the integer ambiguities are refixed from the float ambiguities at every epoch.
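A sketch of the described state handling in the EKF time update; the index sets, the large reset variance and the process-noise values are illustrative assumptions, not RTKLIB settings:

```python
import numpy as np

def time_update(x, P, pos_idx, q_rw, dt):
    """Propagate the EKF state one epoch: reinitialize the kinematic
    baseline coordinates and random-walk the remaining states."""
    P = P.copy()
    # reset coordinate states: decorrelate them and inflate their variance
    P[pos_idx, :] = 0.0
    P[:, pos_idx] = 0.0
    P[pos_idx, pos_idx] = 1e4          # large prior variance (assumed)
    # ambiguities, slant TEC and ZTD: add random-walk process noise
    rest = np.setdiff1d(np.arange(len(x)), pos_idx)
    P[rest, rest] += q_rw[rest] * dt   # q_rw: per-state noise densities
    return x, P
```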
The time to first fix (TTFF) is the chosen criterion for comparing the AR strategies. The TTFF is defined by the first epoch at which the ambiguities are fixed and remain fixed thereafter while the 3D positioning error remains smaller than 5 cm, i.e., the 3D error of the estimated position with respect to the reference position $x$ stays below 5 cm. Here, $x$ is the "true position" with subcentimeter accuracy, calculated by CANPPP (Tétreault et al. 2005) using the precise point positioning (PPP) technique. The processing is reinitialized every 1 h to generate more chances for evaluating the TTFF, and if no ambiguities are fixed after 1 h, we count it as a failure to fix.
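The TTFF evaluation can be expressed compactly; a sketch assuming per-epoch arrays of time, 3D error against the PPP reference, and a fixed/float flag (all names illustrative):

```python
import numpy as np

def ttff(t, err3d, fixed, tol=0.05):
    """Time to first fix: elapsed time until the first epoch from which
    the solution stays fixed and the 3D error stays below tol (5 cm)
    until the session end; np.nan if this is never achieved."""
    ok = np.asarray(fixed, bool) & (np.asarray(err3d) <= tol)
    ok_from = np.logical_and.accumulate(ok[::-1])[::-1]  # suffix AND
    hits = np.flatnonzero(ok_from)
    return t[hits[0]] - t[0] if hits.size else np.nan
```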
The performance of the four AR strategies is compared as follows. Figure 2 shows the 3D positioning errors and the number of fixed ambiguities of the 49.9 km baseline HKNP-HKWS over 1 h. The 3D positioning error of the TSRC first drops below the 5 cm level at 248 s, where 33 of 37 ambiguities are fixed. Figure 3 shows the results for the 42.5 km HKSL-HKWS baseline (left panel) and the 21.4 km T430-HKWS baseline (right panel). In the 42.5 km baseline case, the TSRC, SRC, WL-NL and FAR strategies fix all ambiguities and reach 5 cm level positioning accuracy at 327 s, 465 s, 799 s and 799 s, respectively. In the 21.4 km baseline case, they do so at 230 s, 470 s, 712 s and 712 s, respectively. In these two shorter baselines, the WL-NL and FAR strategies fix all ambiguities within 1 h, which is probably due to the better model strength. Note that the curves of the FAR and WL-NL strategies coincide.

Good cases
The upper panel of Fig. 4 shows the TTFFs of the four AR strategies for the 49.9 km baseline over all 24 h of a day. The TSRC strategy achieves the shortest TTFF in most cases, while the FAR strategy does not fix any ambiguities within 1 h. The TTFF differences between the TSRC and WL-NL strategies are approximately 2 to 34 min, and those between the TSRC and SRC strategies are approximately −5 to 19 min. The upper panel of Fig. 4 also shows that from 10:00 to 21:00 local time, all four AR strategies have longer TTFFs than during the other hours, while the lower panel shows that the ionospheric delay errors of the different satellites are larger from approximately 10:00 to 21:00 than during the other hours. The similar variation patterns of the TTFFs and the ionospheric delays reveal that high ionospheric activity hinders ambiguity resolution.

Bad cases
The TSRC and SRC strategies rely heavily on the VC-matrix of the float ambiguities to select the ambiguity subset and determine the ratio test threshold. The VC-matrix of the float ambiguities is propagated from the VC-matrix of the pseudorange and carrier phase measurements, which can hardly be set accurately even though many variance estimation methods have been proposed (Li et al. 2010; Zhang et al. 2018).
In the following experiment, the effect of an inaccurate VC-matrix of the float ambiguities on the TSRC and SRC strategies can be observed.
The left panel of Fig. 5 shows that for the 21.4 km T430-HKWS baseline, none of the four AR strategies reaches 5 cm accuracy during 15:00-16:00 local time, and the TSRC even yields worse accuracy than the float solution, although it fixes all ambiguities at 1000 s. The reason could be that the float ambiguities and baseline vector are estimated inaccurately due to large ionospheric delay errors, while the VC-matrix of the float ambiguities and baseline vector is propagated to be overly optimistic, since the preset VC-matrix of the measurements does not accurately reflect the real stochastic characteristics of the ionosphere-impacted pseudoranges and carrier phases. Note that the curves of the FAR and WL-NL strategies coincide.

Strategy modification for bad cases
For these bad cases, where fixing ambiguities yields worse positioning accuracy than the float solution, we try to prevent fixing a large subset of ambiguities. Since the VC-matrix of the float ambiguities is too optimistic for the real stochastic characteristics, we amplify it by multiplying it with an inflation factor before the ambiguity fixing process. How to choose and apply such an adjustment of the float ambiguity VC-matrix is outside the scope of this article, but a suggestion could be to use the local ionospheric activity as an index.
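A minimal sketch of this mitigation, reusing src_subset from the earlier sketch; the inflation factor is an assumed tuning value, e.g., indexed on local ionospheric activity:

```python
import numpy as np

# Inflating the float-ambiguity VC-matrix by a factor a scales every
# conditional standard deviation by sqrt(a), which lowers the
# bootstrapping success rate and thus shrinks the SRC/TSRC subsets
# in suspect epochs.
def inflate(Qnn, a):
    return a * Qnn

Q = np.diag([0.04, 0.02, 0.01])  # toy decorrelated VC-matrix, most precise last
print(src_subset(Q), src_subset(inflate(Q, 9.0)))  # e.g., 2 vs 0
```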

Summary and discussion
This study compares three common PAR strategies and the FAR strategy in RTK with randomly selected medium-long baselines in the Hong Kong SatRef network. From the results, we draw the following conclusions:

1. PAR strategies such as the SRC and TSRC can shorten the TTFF in medium-long baselines where FAR is difficult to achieve.

2. Among the four AR strategies, the TSRC achieves the shortest TTFF in most cases and obtains the highest positioning accuracy on average.

3. During periods of high ionospheric activity, the TSRC strategy can obtain worse accuracy than the float solution, although it fixes most ambiguities. The performance can be improved by tuning the VC-matrix of the float ambiguities.
In summary, these experiments indicate that the TSRC performs best among the four common AR strategies regarding the TTFF and positioning accuracy. For all AR strategies, a good stochastic model of the measurements is essential, as it propagates to a good VC-matrix of the float ambiguities.
Furthermore, although the TSRC is tested here in RTK mode, it is also applicable in other positioning techniques where AR is needed, such as PPP-AR and PPP-RTK. There might be new challenges for the TSRC in PPP-AR and PPP-RTK, as external corrections such as the phase center bias and ionospheric corrections introduce extra measurement noise.