Abstract
This paper describes a predictive-simulation-based procedure for comparing recursive suboptimal algorithms developed for nonlinear filtering problems, including navigation data processing problems. The algorithms are compared in terms of accuracy, consistency, and computational complexity. The provided examples explain the procedure and illustrate its application.
1 INTRODUCTION
Measurement data processing algorithms play an important role in designing modern navigation and radio engineering systems [1–10]. Among them, algorithms that assume a random character of the processed signals and sensor errors are widely applied. Such algorithms can be effectively designed within the Bayesian filtering theory, which yields mean-square optimal estimators under the assumption that the signals and measurement errors are random processes or sequences [11–20].
Markov processes described by shaping filters in the form of differential or difference equations are used most extensively. For linear models with Gaussian errors, the well-known Kalman filter is a universal (simple and computationally convenient) optimal estimator [2, 11]. When the equations of the shaping filters, or the equations relating the measurements to the estimated parameters, contain nonlinearities, no universal algorithm is available, which motivates researchers to design simplified (suboptimal) algorithms [4, 5, 9, 10]. On the one hand, these algorithms should achieve accuracy close to the potential accuracy of the optimal algorithm; on the other hand, they should not be computationally intensive.
It should also be mentioned that the filtering algorithms developed within the stochastic Bayesian approach can account for the measurement errors and generate both the estimates and their accuracy characteristics in the form of the estimated error covariance matrix. This is particularly relevant for navigation data processing problems, where both the estimated parameters and a quantitative characteristic of the estimation accuracy are required. Algorithms in which the estimated covariance matrix agrees with the true one are called consistent [6, 21, 22]. It should be borne in mind that the term consistency, commonly applied to the properties of estimates [8], is used here, following [6], to characterize the performance of estimators. It follows from the above that designing suboptimal algorithms for nonlinear filtering problems inevitably involves the need to compare them in terms of accuracy, consistency, and computational complexity.
The comparison is often based on simulation using the method of statistical trials. Its principle is as follows: the signals and measurement errors are first repeatedly simulated with the accepted models and then used to compute the sought estimates and their errors with the algorithm under study [3, 5, 23–25]. This approach is being increasingly applied. Such simulation is further referred to as predictive simulation, meaning that the algorithm efficiency can be finally determined only in real tests.
No generally accepted procedure for algorithm comparison has been formulated to date. In some cases, the authors limit themselves to a one-time solution of the problem without applying the method of statistical trials; this is typical of algorithms based on the deterministic approach [26–30]. It is also not uncommon that the proposed algorithm is not compared with other algorithms at all; only its own performance is demonstrated.
The notion of algorithm consistency has been introduced only recently and is rarely covered in Russian publications [21, 22, 31], so even less attention is paid to consistency analysis. However, multiple simulation and processing of many samples makes it possible to compute the unconditional real error covariance matrix, which can then be compared with the unconditional estimated covariance matrix to assess consistency. Even when these computations are made [23–25], the algorithms are compared only qualitatively in terms of accuracy and computational burden; no quantitative characteristics are usually introduced in the consistency analysis.
In solving nonlinear problems, it is important to bear in mind that the posterior (conditional) probability density functions (PDFs) of the estimated parameters and their errors are not Gaussian, so even knowledge of the root mean square (RMS) values does not provide a comprehensive characterization of the estimation errors.
This paper is devoted to the development of a procedure for comparing nonlinear filters in terms of accuracy, consistency, and computational burden based on predictive simulation. In this research, the authors elaborate on the results obtained in [32].
The paper is structured as follows. Section 2 formulates the nonlinear filtering problem and generally describes the optimal solution algorithm within the Bayesian approach. The proposed procedure is detailed in Section 3 and explained in Section 4 by the example of nonlinear filtering of a Markov sequence, where the system equations contain the second-order nonlinearity, and the measurements are linear. In Section 5, the application of the procedure is illustrated by estimating the performance of extended and unscented Kalman filters in map-aided navigation. The main conclusions are given in the Conclusions section.
2 PROBLEM STATEMENT
We consider the following nonlinear discrete-time filtering problem: estimate an n-dimensional random vector \({{x}_{k}}\) described by the shaping filter [31]

\(x_k = f_k(x_{k-1}) + \Gamma_k w_k,\)  (1)

by m-dimensional measurements

\(y_k = h_k(x_k) + v_k,\)  (2)

where \(f_k(x_{k-1})\) and \(h_k(x_k)\) are known nonlinear functions; k is the index of discrete time; x0 is an n-dimensional random Gaussian vector with the PDF \(p(x_0) = N(x_0;\bar{x}_0,P_0^x)\) (here and below, \(N(a;\bar{a},A)\) denotes the density of a Gaussian random vector a with mathematical expectation \(\bar{a}\) and covariance matrix A); wk is the nw-dimensional zero-mean discrete Gaussian white noise independent of x0 with the known covariance matrix Qk; Γk is a known n × nw matrix; \(v_k\) is the m-dimensional zero-mean discrete Gaussian white noise independent of x0 and wk, with the known covariance matrix Rk.
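To make the model concrete, the following minimal Python sketch simulates one sample of a scalar analog of the shaping filter (1) and the measurements (2). The particular dynamics, measurement function, noise intensities, and the name `simulate` are illustrative assumptions, not taken from the paper.

```python
import math, random

def simulate(f, h, gamma, q_std, r_std, x0, n_steps, rng):
    """Generate one sample of a scalar analog of the shaping filter (1)
    and the measurements (2):
        x_k = f(x_{k-1}) + gamma * w_k,   y_k = h(x_k) + v_k,
    with zero-mean Gaussian white noises w_k and v_k."""
    xs, ys = [], []
    x = x0
    for _ in range(n_steps):
        x = f(x) + gamma * rng.gauss(0.0, q_std)   # state propagation, cf. (1)
        y = h(x) + rng.gauss(0.0, r_std)           # measurement, cf. (2)
        xs.append(x)
        ys.append(y)
    return xs, ys

# illustrative mildly nonlinear, stable dynamics with linear measurements
rng = random.Random(1)
xs, ys = simulate(lambda x: 0.9 * x + 0.1 * math.sin(x), lambda x: x,
                  gamma=1.0, q_std=0.1, r_std=0.3, x0=0.5, n_steps=50, rng=rng)
```

Repeating such runs with independent noise realizations produces the sample arrays used in predictive simulation.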
It is known that in Bayesian filtering, the mean-square optimal estimate \(\hat{x}_k^{OPT}(Y_k)\) and the conditional covariance matrix \(P_k^{OPT}(Y_k)\) are determined as [5, 12, 14, 33–35]:

\(\hat{x}_k^{OPT}(Y_k) = \int x_k\, p(x_k|Y_k)\, dx_k,\)  (3)

\(P_k^{OPT}(Y_k) = \int \left(x_k - \hat{x}_k^{OPT}(Y_k)\right)\left(x_k - \hat{x}_k^{OPT}(Y_k)\right)^T p(x_k|Y_k)\, dx_k,\)  (4)

where \(p(x_k|Y_k)\) is the conditional PDF for the measurement set Yk = (y1, … yk)T, hereinafter referred to as the posterior density, and E is the mathematical expectation with a subscript indicating the PDF over which it is computed. We underline that the optimal estimate minimizes the conditional (4) and unconditional (5) covariance matrices [3, 5, 8, 33]:

\(G_k^{OPT} = E\left[\left(x_k - \hat{x}_k^{OPT}(Y_k)\right)\left(x_k - \hat{x}_k^{OPT}(Y_k)\right)^T\right] = E_{Y_k}\left[P_k^{OPT}(Y_k)\right].\)  (5)
By matrix minimization, we mean minimization of the relevant quadratic forms [3, 5, 8, 33]. In the relationships above, the conditional covariance matrix (4) determines the current estimation error for a certain sample Yk, and unconditional matrix (5) determines the error obtained by probability averaging over all possible measurements.
To solve the formulated problem, we need the posterior density \(p(x_k|Y_k)\), whose computation is rather complicated; this motivates researchers to design various suboptimal algorithms. This, in turn, calls for the need to analyze their performance and compare them. Below we consider a procedure for comparing the algorithms in terms of accuracy and computational complexity, on the one hand, and consistency, i.e., agreement between the estimated and real covariance matrices, on the other.
3 PROCEDURE OF COMPARATIVE ANALYSIS
The proposed procedure is based on predictive simulation conducted with the widely applied Monte Carlo method. Its principle in the considered case is that the signals and measurement errors are first simulated with the accepted models (1) and (2) and then employed to calculate the sought estimates and their errors \(\varepsilon_k^{\mu(j)}(Y_k^{(j)})\) with the analyzed algorithm:

\(\varepsilon_k^{\mu(j)}(Y_k^{(j)}) = x_k^{(j)} - \hat{x}_k^{\mu}(Y_k^{(j)}),\)  (6)
where \(x_k^{(j)}\), \(Y_k^{(j)}\), \(j = \overline{1,L}\) are the random sequences simulated using (1) and (2); L is the total number of samples; \(\hat{x}_k^{\mu}(Y_k^{(j)})\) are the estimates of the analyzed algorithm marked with superscript μ. As described, for example, in [31], these are used to calculate the unconditional covariance matrices serving as a basis for further accuracy and consistency analysis:

\(G_k^{\mu} = \frac{1}{L}\sum_{j=1}^{L} \varepsilon_k^{\mu(j)}\left(\varepsilon_k^{\mu(j)}\right)^T,\)  (7)

\(\hat{G}_k^{\mu} = \frac{1}{L}\sum_{j=1}^{L} P_k^{\mu}(Y_k^{(j)}),\)  (8)
where \(P_k^{\mu}(Y_k^{(j)})\) are the conditional estimated covariance matrices for algorithm μ. It is proposed to analyze the accuracy by calculating the accuracy factor \(\xi_{ik}^{\mu}\), which compares the RMS errors in estimating the i-th state vector component for the analyzed and basic algorithms:

\(\xi_{ik}^{\mu} = \frac{\sqrt{[G_k^{\mu}]_{i,i}} - \sqrt{[G_k^{*}]_{i,i}}}{\sqrt{[G_k^{*}]_{i,i}}},\)  (9)
where \([G_k^{\mu}]_{i,i}\) are the diagonal elements of the unconditional covariance matrix determining the error variances for algorithm μ, and \([G_k^{*}]_{i,i}\) are the corresponding values for the algorithm that we call the basic one.
Such factors are commonly applied in analyzing the sensitivity of linear algorithms [38]. It is desirable to use a nonlinear optimal algorithm as the basic one if it can be implemented in predictive simulation; this is the ideal situation. By optimal algorithm we mean an algorithm ensuring the specified computational error of the sought estimates, for example, an algorithm based on the Monte Carlo method [36, 37]. This becomes feasible in some cases because predictive simulation is conducted in offline processing mode with no strict limitations on the computation volume. If this is difficult or impossible, \([G_k^{*}]_{i,i}\) can be taken as the error variance of one of the compared algorithms or as the variance achieving the Cramér-Rao lower bound (CRLB) for the optimal covariance matrix [5, 39–42].
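As a toy illustration of the accuracy factor, the following sketch runs a Monte Carlo comparison for a scalar static problem. The optimal linear estimate plays the role of the basic algorithm, and a deliberately detuned gain plays the role of the analyzed one; all numbers and names are hypothetical, and the factor's normalization follows the per-component RMS comparison described in the text.

```python
import math, random

# x ~ N(0, P0) is estimated from y = x + v, v ~ N(0, R).
rng = random.Random(2)
L = 20000
P0, R = 4.0, 1.0
K_opt = P0 / (P0 + R)          # optimal gain: the "basic" algorithm
K_mu = 0.6                     # detuned gain: the "analyzed" algorithm

se_opt, se_mu = 0.0, 0.0
for _ in range(L):
    x = rng.gauss(0.0, math.sqrt(P0))
    y = x + rng.gauss(0.0, math.sqrt(R))
    se_opt += (x - K_opt * y) ** 2   # squared error of the basic algorithm
    se_mu += (x - K_mu * y) ** 2     # squared error of the analyzed algorithm

G_star = se_opt / L            # real error variance of the basic algorithm
G_mu = se_mu / L               # real error variance of the analyzed algorithm
# accuracy factor: relative excess of the RMS error over the basic algorithm
xi = (math.sqrt(G_mu) - math.sqrt(G_star)) / math.sqrt(G_star)
```

For these numbers the theoretical variances are 0.8 (optimal) and 1.0 (detuned), so the factor is near 0.12, i.e., about a 12% RMS error increase.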
To analyze the algorithm consistency, it is proposed to calculate the factor \(\varsigma_{ik}^{\mu}\), which quantifies the difference between the real and estimated RMS errors for each state vector component:

\(\varsigma_{ik}^{\mu} = \frac{\left|\sqrt{[G_k^{\mu}]_{i,i}} - \sqrt{[\hat{G}_k^{\mu}]_{i,i}}\right|}{\sqrt{[G_k^{\mu}]_{i,i}}},\)  (10)

where \(\hat{G}_k^{\mu}\) is the unconditional estimated covariance matrix obtained by averaging the conditional matrices \(P_k^{\mu}(Y_k^{(j)})\) over the samples.
As an additional accuracy characteristic of the analyzed estimator, it is proposed to calculate the factor \(\rho_{ik}^{\mu}\) determining the probability of the estimation error falling within a specified interval for each i-th component. This could be, for example, the interval of the tripled RMS error (the \( \pm 3\sigma \) interval). The probability can then be defined as

\(\rho_{ik}^{\mu} = \frac{N(\varepsilon_{ik}^{\mu(j)})}{L},\)  (11)
where \(N(\varepsilon_{ik}^{\mu(j)})\) is the number of estimation errors \(\varepsilon_{ik}^{(j)}\), \(j = \overline{1,L}\), of the i-th state vector component for which \(|\varepsilon_{ik}^{\mu(j)}| \leqslant 3\sqrt{[P_k^{\mu(j)}]_{i,i}}\).
In a number of cases, finding the samples with the maximum errors (6) for some components at a fixed time can prove helpful. These samples can then be used to analyze more carefully the reasons why the accuracy of a suboptimal algorithm degrades, without applying the computationally intensive method of statistical trials. The maximum errors and probability (11) are especially helpful in nonlinear problems with non-Gaussian distributions of the estimation errors. It is also beneficial to plot histograms of the error array \(\varepsilon_{ik}^{(j)}\) obtained in predictive simulation, as is done, for example, in [23–25].
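The consistency factor and the empirical ±3σ probability can be sketched as follows for a simulated error array. The reported variance `P_est` of the (slightly optimistic) estimator and all numbers are illustrative assumptions.

```python
import math, random

rng = random.Random(3)
L = 10000
true_var = 1.0                 # actual variance of the estimation errors
P_est = 0.9                    # variance reported by the analyzed algorithm

# simulated estimation errors with the true variance
errors = [rng.gauss(0.0, math.sqrt(true_var)) for _ in range(L)]

# consistency factor: relative mismatch of real and estimated RMS errors
G_real = sum(e * e for e in errors) / L
zeta = abs(math.sqrt(G_real) - math.sqrt(P_est)) / math.sqrt(G_real)

# empirical probability of the error falling within the reported ±3σ interval
inside = sum(1 for e in errors if abs(e) <= 3.0 * math.sqrt(P_est))
rho = inside / L
```

With a 10% underestimated variance, the RMS mismatch is near 5%, and the empirical ±3σ probability stays slightly below the Gaussian 0.997 level.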
To analyze the computational expenses of the considered algorithms, it is proposed to use the complexity factor

\(T^{\mu} = \frac{\tau^{\mu}}{\tau^{*}},\)  (12)

where \(\tau^{\mu} = \frac{1}{L}\sum_{j=1}^{L} t_j^{\mu}\), \(\tau^{*} = \frac{1}{L}\sum_{j=1}^{L} t_j^{*}\); \(t_j^{\mu}\) is the processor time spent on estimation with the analyzed algorithm, and \(t_j^{*}\) is the corresponding time for the fastest algorithm.
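The complexity factor can be estimated by averaging wall-clock processor times over repeated runs, as in the following sketch; the two stand-in "algorithms" are placeholders of clearly different cost, not the filters compared in the paper.

```python
import time

def time_algorithm(step, n_runs=200):
    """Average wall-clock time of one estimator run: a crude analog of tau."""
    t0 = time.perf_counter()
    for _ in range(n_runs):
        step()
    return (time.perf_counter() - t0) / n_runs

# placeholder "algorithms" of clearly different computational cost
def cheap():
    return sum(range(100))

def costly():
    return sum(i * i for i in range(5000))

tau_star = time_algorithm(cheap)   # the fastest algorithm gives tau*
tau_mu = time_algorithm(costly)
T_mu = tau_mu / tau_star           # complexity factor: T = tau / tau*
```

Averaging over many runs smooths out timer resolution and scheduling noise, which is why the factor is defined through mean times rather than single measurements.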
The procedure of calculating these factors is described in Table 1 as a pseudocode.
Let us discuss the proposed procedure in more detail using an example.
4 METHODICAL EXAMPLE
Consider the following methodical example of a two-dimensional polynomial filtering problem of the second order, where the estimated vector xk = [x1,k x2,k]T is described by the equations [42]
and the measurements (2) are linear:

\(y_k = x_{1,k} + v_k.\)  (14)
Based on (13), the formulas for the nonlinear function fk(•), matrix Γk, and vector wk included in (1) can be easily written:
Note that the considered problem statement covers the problem of estimating the correlation interval of the first-order Markov process by its measurements against the background of white noise.
To study the procedure in more detail, we compare several suboptimal estimators based on Gaussian approximation of posterior density: extended Kalman filter (EKF), second-order polynomial filter (PF2), and unscented Kalman filter (UKF) [31, 44–48].
The following parameters were set in the predictive simulation: \({{\bar {x}}_{{10}}} = 2.5,\) \({{\bar {x}}_{{20}}} = 0.5,\) \(\sigma _{1}^{2} = 4,\) \(\sigma _{2}^{2} = 0.04,\) \(\sigma _{{{{w}_{1}}}}^{2} = 0.01,\) \(\sigma _{v}^{2} = 0.1,\) Δt = 0.1 s. To compute the real and estimated covariance matrices, L = 10 000 samples of the sequences (13) and their measurements (14) were simulated. A nonlinear optimal algorithm (OPT) based on the sequential Monte Carlo method with 10 000 particles was run as the basic algorithm for the accuracy comparison. Note that the Monte Carlo method can here be implemented using the Rao-Blackwellization procedure, because with the component x2,k fixed, the problem becomes linear in x1,k [5, 19, 35, 36].
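For reference, a minimal bootstrap particle filter of the kind that can serve as a sequential Monte Carlo basic algorithm is sketched below for a scalar linear-Gaussian toy model. Unlike the paper's optimal algorithm, it does not use Rao-Blackwellization, and all model parameters are illustrative.

```python
import math, random

def bootstrap_pf(ys, f, h, q_std, r_std, x0_mean, x0_std, n_particles, rng):
    """Minimal bootstrap particle filter for a scalar model
    x_k = f(x_{k-1}) + w_k,  y_k = h(x_k) + v_k.
    Returns the sequence of posterior-mean estimates."""
    parts = [rng.gauss(x0_mean, x0_std) for _ in range(n_particles)]
    est = []
    for y in ys:
        # propagate the particles through the dynamics with process noise
        parts = [f(x) + rng.gauss(0.0, q_std) for x in parts]
        # weight by the Gaussian measurement likelihood
        ws = [math.exp(-0.5 * ((y - h(x)) / r_std) ** 2) for x in parts]
        s = sum(ws) or 1.0
        ws = [w / s for w in ws]
        est.append(sum(w * x for w, x in zip(ws, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return est

# illustrative run on a simulated first-order Markov sequence
rng = random.Random(4)
truth, ys = [], []
x = 0.0
for _ in range(30):
    x = 0.95 * x + rng.gauss(0.0, 0.2)
    truth.append(x)
    ys.append(x + rng.gauss(0.0, 0.3))
est = bootstrap_pf(ys, lambda x: 0.95 * x, lambda x: x,
                   0.2, 0.3, 0.0, 1.0, 500, rng)
```

Increasing the number of particles drives the computational error of such a reference estimator down, which is what makes it usable as the basic algorithm in offline predictive simulation.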
The formulas for the EKF and PF2 for the formulated problem were obtained using the relationships in Table 3 in [31]; those for the optimal algorithm, from [36]; and those for the UKF, using the pseudocode given in [45].
If the thresholds for the factors (9), (10) are set to 0.1, meaning that the difference from the compared value is at most 10%, all the considered filters meet these requirements. The EKF features the lowest accuracy and consistency among the suboptimal filters, while the accuracy and consistency of the UKF and PF2 are comparable.
Figure 1 presents the plots of the factors \(\xi_{2k}^{\mu}\), \(\varsigma_{2k}^{\mu}\), where the blue curve is the EKF (1), the black one is the PF2 (2), and the violet one is the UKF (3). The factors for the first component are close for all the compared suboptimal algorithms and are not shown here.
The calculated factors (11) characterizing the probability of estimation error falling within the \( \pm 3\sigma \) interval are shown in Fig. 2.
As is known, for the Gaussian distribution the probability of falling within the \( \pm 3\sigma \) interval is 0.997 for each component. The presented results show that all the algorithms except the EKF provide a probability of the estimation errors falling within the \( \pm 3\sigma \) interval that corresponds to the Gaussian distribution. For the EKF, this probability after processing the 25th measurement is \(\rho _{1}^{{EKF}} \approx 0.970\). Therefore, we can suppose that the densities of the estimation errors are close to Gaussian for all the considered suboptimal estimators except the EKF.
The factors characterizing the algorithm computational complexity (12) are given in Table 2.
The factors were determined using a quad-core Intel Core i5-4690K processor with a 3.8 GHz clock speed. Comparing the complexity factors reveals that the PF2 is 60% more complex than the simplest algorithm, the EKF, and the UKF is over 400% more complex. The authors are aware of the approximate character of this complexity estimate; nevertheless, the obtained results give some understanding of the ratio of the computational complexities.
5 APPLYING THE COMPARATIVE ANALYSIS PROCEDURE TO MAP-AIDED NAVIGATION
As an example illustrating the application of the developed procedure, consider the problem of designing algorithms for the simplest variant of map-aided navigation. This problem has aroused considerable interest in recent years with the large-scale implementation of various mobile robots such as autonomous underwater vehicles [49–51]. In [52], a map-aiding algorithm is proposed based on separating the constant and varying components of the aided navigation system errors and linearizing the measurement equations with respect to the varying component. This, in particular, makes it necessary to determine the prior uncertainty of the vehicle initial position for which such linearization is admissible. Let us demonstrate that the described procedure helps answer this question, too.
Let us formulate the map-aided navigation problem following [52, 53]. Suppose that an onboard navigation system (NS) generates the vehicle coordinates \(y_k^{NS} = [y_k^{NS(1)}\;\; y_k^{NS(2)}]^T\) in the form

\(y_k^{NS} = X_k + \Delta_k,\)  (15)
where \(X_k = [X_k^{(1)}\;\; X_k^{(2)}]^T\) are the unknown true coordinates, and \(\Delta_k = [\Delta_k^{(1)}\;\; \Delta_k^{(2)}]^T\) are their measurement errors. We also assume that a device measuring some geophysical field (a gravimeter, magnetometer, or echo sounder) and a corresponding digital map of the navigation area are available.
The measurements can be written as

\(y_k = \varphi(X_k) + \Delta y_k^{\Sigma},\)  (16)
where the function \(\varphi(X_k)\) determines the dependence of the measured parameter on the coordinates, and \(\Delta y_k^{\Sigma}\) is the total measurement error of the map and the device.
Consider the solution to the problem within the so-called invariant statement. Substituting \(\left(y_k^{NS} - \Delta_k\right)\) for Xk in (16), we can formulate the problem of filtering the errors Δk by the measurements

\(y_k = \varphi\left(y_k^{NS} - \Delta_k\right) + \Delta y_k^{\Sigma}.\)  (17)
In this statement, NS measurements are treated as the known input signals [54, 55].
Consider the simplest version of the problem, where NS errors are considered constant during the correction, and the total error of the map and the measuring device is a discrete white noise.
It follows from the above that we need to estimate the state vector

\(\Delta_k = \Delta_{k-1} = \Delta = [\Delta^{(1)}\;\; \Delta^{(2)}]^T\)  (18)
by the measurements

\(y_k = s_k(\Delta) + v_k,\quad s_k(\Delta) = \varphi\left(y_k^{NS} - \Delta\right).\)  (19)
We also consider the vector Δ and the errors \(v_i\) to be independent of each other, with

\(p(\Delta) = N\left(\Delta; \bar{\Delta}, P_0^{\Delta}\right),\quad E[v_i] = 0,\quad E[v_i v_j] = \sigma_{\Sigma}^2 \delta_{ij}.\)  (20)
Then the following formula for the posterior density can be easily written [53]:

\(p(\Delta|Y_k) = c\, N\left(\Delta; \bar{\Delta}, P_0^{\Delta}\right) \prod_{i=1}^{k} \exp\left\{ -\frac{\left(y_i - s_i(\Delta)\right)^2}{2\sigma_{\Sigma}^2} \right\},\)  (21)
where Yk = (y1, y2, … yk)T and c is the normalizing constant.
Next, we demonstrate how the proposed procedure helps answer the following question: at what initial uncertainty of the vehicle position do the algorithms based on Gaussian approximation prove efficient?
By the algorithm efficiency we mean providing the specified consistency and accuracy factors, here set to 10%.
Note that within the formulated problem it is actually not difficult to implement the optimal estimator: the NS errors are constant, and the posterior density therefore has the rather simple form (21). The accuracy needed to calculate the optimal estimate can be easily achieved by simply increasing the number of nodes in the point-mass method or the number of particles in the Monte Carlo method [56]. At the same time, as is known, for example, from [5], for the problem considered within the invariant approach, the covariance matrix calculated in the EKF achieves the CRLB computed for the fixed true trajectory set by the coordinates Xi, \(i = \overline{1,k}\). Thus, the RMS error calculated in the EKF can be used as \(\sqrt{[G_k^*]_{i,i}}\) for calculating the accuracy factor, which simplifies the predictive simulation procedure. Note that the derivatives of the function sk(Δ) required to determine the CRLB are computed at the points Xi, \(i = \overline{1,k}\). The need to fix the trajectory in the accuracy analysis is a consequence of formulating and solving the aiding problem within the invariant approach [5, 8, 53, 55]. In this approach, the final coordinate estimate obtained by correcting the navigation system readings \(y_k^{NS}\) with the estimates \(\hat{\Delta}_k\), that is, \(\hat{y}_k^{NS} = y_k^{NS} - \hat{\Delta}_k\), agrees with the non-Bayesian trajectory estimate [8].
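The point-mass evaluation of the posterior density of a constant error mentioned above can be sketched in one dimension as follows; the synthetic "field" \(s_i(\Delta)\), the grid, and all parameters are hypothetical stand-ins for the gravity-map data.

```python
import math, random

def point_mass_posterior(ys, s_funcs, grid, prior_mean, prior_std, sigma):
    """Evaluate, on a grid, the posterior density of a constant error Delta
    given measurements y_i = s_i(Delta) + v_i with a Gaussian prior and
    Gaussian white noise (a one-dimensional point-mass sketch)."""
    log_post = []
    for d in grid:
        lp = -0.5 * ((d - prior_mean) / prior_std) ** 2   # Gaussian prior
        for y, s in zip(ys, s_funcs):
            lp += -0.5 * ((y - s(d)) / sigma) ** 2        # likelihood terms
        log_post.append(lp)
    m = max(log_post)                                     # for numeric stability
    w = [math.exp(lv - m) for lv in log_post]
    total = sum(w)
    w = [wi / total for wi in w]                          # normalized node weights
    mean = sum(wi * d for wi, d in zip(w, grid))          # posterior mean
    return w, mean

# hypothetical smooth field along the trajectory and noisy measurements
rng = random.Random(5)
true_delta = 0.7
s_funcs = [lambda d, i=i: math.sin(0.5 * (i - d)) for i in range(20)]
ys = [s(true_delta) + rng.gauss(0.0, 0.05) for s in s_funcs]
grid = [i * 0.01 - 2.0 for i in range(401)]
w, mean = point_mass_posterior(ys, s_funcs, grid, 0.0, 1.0, 0.05)
```

Refining the grid (more nodes) drives the computational error down, which is why such an estimator can serve as the reference for the accuracy factor in this problem.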
Suppose that the map-aided navigation problem is solved using the gravity field data and the EGM 2008 gravity map [52]. The gravity anomaly isolines and the vehicle trajectory are presented in Fig. 3. The accuracy, consistency, probability, and complexity factors were calculated by averaging over 10 000 samples. We used σΣ = 0.5 mGal and Δt = 1 min; the trajectory was fixed, with the velocity along the trajectory V = 5 m/s; the coordinates of the start point were X1S = 21.5 km, X2S = 21.5 km, and those of the end point X1E = 14.5 km, X2E = 10.5 km; the distance between the measurements was δ = 300 m; the parameter σΔ was successively set to 0.4, 0.5, 0.6, and 0.7 km.
First, we demonstrate the simulation results for σΔ = 0.4 km. The plots of the factor \(\xi_{ik}^{\mu}\) vs. the number of measurements are presented in Fig. 4. As in the methodical example, in this and the following figures the blue curve (1) is the EKF, and the violet curve (3) is the UKF.
As follows from the results, after the aiding, the RMS errors of all coordinates differ from the CRLB by at most 8% for both suboptimal algorithms. The plots of the factor \(\varsigma_{ik}^{\mu}\) vs. the number of measurements are presented in Fig. 5.
The calculated consistency factors demonstrate that with σΔ = 0.4 km for both EKF and UKF the difference between the real and estimated variances for each state vector component does not exceed 6%. The plots of factor \(\rho _{{ik}}^{\mu }\) vs. the number of measurements are presented in Fig. 6.
Along with the factor curves, Fig. 6 shows the confidence probability ρΓ = 0.997 for the Gaussian distribution with a red dashed line. It can be easily seen that the probabilities of estimation error falling within the \( \pm 3\sigma \) interval are close to ρΓ.
The computational complexity of the considered algorithms was compared using the processor from the previous example, with the EKF taken as the reference. The value TUKF ≈ 5 was obtained, which means that the UKF is about five times as computationally complex as the EKF. This difference is mostly caused by the computationally intensive interpolation rule for calculating the field values using cubic splines, which the UKF applies in the transformation of each sigma point. Note that the UKF complexity is often considered to be close to that of the EKF. Some publications discuss the high computational burden of the UKF and propose special techniques to reduce the number of operations [59]. Generally, the ratio of the UKF and EKF complexities deserves separate research and is outside the scope of this paper.
The factor curves for σΔ = 0.5, 0.6, and 0.7 km look similar; the factors calculated after the correction are given in Table 3.
In the table, the RMS errors lower than the CRLB are marked with an asterisk. This is theoretically impossible, so the negative factors \(\xi_k^{\mu}\) are explained by the computational errors, estimated to be about 5%. The red cells contain the values that do not meet the efficiency requirements for a suboptimal algorithm.
With σΔ = 0.4 km, both algorithms meet the 10% accuracy and consistency requirements and can be considered efficient. With σΔ = 0.5 km, only the UKF meets these requirements, while the EKF is generally no longer efficient, although it still meets both requirements when estimating \(\Delta_k^{(1)}\). With σΔ = 0.6 km, the UKF still performs properly, and it starts to fail at σΔ = 0.7 km.
Since the EKF is based on linearization, the above comparison using the proposed procedure helps determine the prior uncertainty of the vehicle initial position for which linearization is admissible and, more generally, assess the efficiency of the algorithms based on Gaussian approximation.
Before drawing the final conclusions, the following notes should be made.
(1) The algorithm comparison included calculation of the factors (9), (10) used to analyze the accuracy and consistency individually for each state vector component. In general, a need may arise to calculate factors characterizing the accuracy and consistency simultaneously for several components. The probability of several components falling within a preset domain can be easily calculated. As to the factors (9), (10), in the simplest case this can be done, for example, using the covariance matrix trace. However, this issue should be considered in more detail in a separate study.
(2) It should be noted that the quantitative comparison of the RMS errors of various algorithms, or of the estimated and real RMS errors of the same algorithm, is not new. However, as already mentioned in the Introduction, such comparison is at best made in terms of accuracy. The proposed procedure also allows comparison in terms of consistency and computational complexity.
(3) The described procedure is intended for comparing the nonlinear estimation algorithms designed within the stochastic approach. It should be mentioned, however, that there exists a range of estimators based on the deterministic approach, which does not assume that the signals and measurement errors are random. These include, for example, the ellipsoid method, the graph optimization method, the dynamic regressor extension method, etc. [26–30, 57]. These algorithms should also be compared with others, including the stochastic ones. Generally, deterministic algorithms do not provide estimated accuracy characteristics, and this type of comparison is a difficult issue; nevertheless, as far as accuracy comparison is concerned, the proposed procedure can definitely be applied in this case, too. The same refers to the so-called robust algorithms [60].
(4) Undoubtedly, the algorithm efficiency is best judged by its performance in problems with real data. Still, the results obtained with the proposed procedure based on predictive simulation make it possible to analyze the accuracy dependence on various conditions, which can improve the efficiency of tests in real conditions, particularly by making them less time- and resource-consuming.
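The suggestion in note (1) to combine several components via the covariance matrix trace can be sketched as follows; the matrices and the particular normalization are illustrative assumptions mirroring the per-component accuracy factor.

```python
import math

def trace_accuracy_factor(G_mu, G_star):
    """Combined accuracy factor over all components, built from the traces
    of the real covariance matrices of the analyzed and basic algorithms."""
    t_mu = sum(G_mu[i][i] for i in range(len(G_mu)))      # trace of G^mu
    t_star = sum(G_star[i][i] for i in range(len(G_star)))  # trace of G*
    return (math.sqrt(t_mu) - math.sqrt(t_star)) / math.sqrt(t_star)

# illustrative 2x2 covariance matrices (made-up numbers)
xi = trace_accuracy_factor([[1.21, 0.0], [0.0, 0.04]],
                           [[1.00, 0.0], [0.0, 0.04]])
```

A trace-based factor weights the components by their absolute variances, so components measured in different units may need prior normalization; this is one reason the issue deserves the separate study mentioned above.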
6 CONCLUSIONS
This paper describes a procedure for assessing the efficiency and comparing nonlinear filtering algorithms based on predictive simulation, where the factors allowing quantitative comparison of the algorithms in terms of accuracy, consistency, and computational complexity are calculated. The accuracy factor characterizes the decrease of the algorithm RMS estimation error for each state vector component compared to that of a basic algorithm or the CRLB. The consistency factor characterizes the difference between the real and estimated RMS errors for each component, and complexity factor, the algorithm running time compared to the simplest algorithm. Apart from these factors, the procedure can also be used to calculate the probability of error falling within a preset interval or domain.
The procedure is explained by the example of nonlinear filtering of a two-dimensional Markov sequence described by equations with a second-order nonlinearity and observed through linear measurements. The extended and unscented Kalman filters and the second-order polynomial filter have been compared. In the considered example, all the suboptimal algorithms meet the 10% accuracy and consistency requirements. At the same time, the EKF provides the lowest probability of the error falling within the three-sigma interval, 0.97, which is noticeably lower than the 0.997 probability corresponding to the Gaussian law and provided by the other algorithms. As to the computational expenses, the PF2 is 60% and the UKF over 400% more complex than the EKF.
The application of the procedure is demonstrated by comparing and assessing the efficiency of the EKF and UKF in the map-aided navigation problem. By algorithm efficiency we mean providing the specified consistency factor and the accuracy factor calculated with respect to the CRLB. The initial uncertainty of the vehicle position, defined by the standard deviation of each coordinate, at which the algorithms remain efficient has been determined. It has been shown that for the studied example the EKF is efficient under an initial uncertainty of 400 m, while the UKF still performs well with a 600 m uncertainty; however, the UKF is about five times as computationally complex as the EKF.
REFERENCES
Chelpanov, I.B., Optimal’naya obrabotka signalov v navigatsionnykh sistemakh (Optimal Signal Processing in Navigation Systems), Moscow: Nauka, 1967.
Mathematical System Theory: The Influence of R.E. Kalman, Antoulas, A.C., Ed., Berlin: Springer-Verlag, 1991. https://doi.org/10.1007/978-3-662-08546-2
Dmitriev, S.P., Vysokotochnaya morskaya navigatsiya (High-Precision Marine Navigation), St. Petersburg: Sudostroenie, 1991.
Yarlykov, M.S. and Mironov, M.A., Markovskaya teoriya otsenivaniya sluchainykh protsessov (Markov Theory of Estimating Random Processes), Moscow: Radio i svyaz’, 1993.
Stepanov, O.A., Primenenie teorii nelineinoi fil’tratsii v zadachakh obrabotki navigatsionnoi informatsii (Nonlinear Filtering Theory as Applied to Navigation Data Processing), CSRI Elektropribor, St. Petersburg, 1998.
Bar-Shalom, Y., Li, X., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software, New York: Wiley-Interscience, 2001.
Brown, R.G. and Hwang, P.Y.C., Introduction to Random Signals and Applied Kalman Filtering with Matlab Exercises, John Wiley, 2012, 4th edition.
Stepanov, O.A., Osnovy teorii otsenivaniya s prilozheniyami k zadacham obrabotki navigatsionnoi informatsii. Part 1. Vvedenie v teoriyu otsenivaniya (Fundamentals of the Estimation Theory with Applications to the Problems of Navigation Information Processing. Part 1. Introduction to the Estimation Theory), Concern CSRI Elektropribor, JSC, St. Petersburg, 2017, 3rd edition.
Shakhtarin, B.I., Nelineinaya optimal’naya fil’tratsiya v primerakh i zadachakh (Nonlinear Optimal Filtering. Examples and Tasks), Moscow, 2008.
Dunik, J., Biswas, S.K., Dempster, A.G., Pany, T., and Closas, P., State estimation methods in navigation: Overview and application, IEEE Aerospace and Electronic Systems Magazine, 2020, vol. 35, no. 12, pp. 16–31. https://doi.org/10.1109/MAES.2020.3002001
Kalman, R.E., A new approach to linear filtering and prediction problems, Transactions ASME, Series D, Journal of Basic Engineering, 1960, vol. 82, no. 1, pp. 35–45.
Stratonovich, R.L., Conditional Markov processes, Theory of Probability and Its Applications, 1960, vol. 5, no. 2. https://doi.org/10.1137/1105015
Stratonovich, R.L., Application of Markov process theory to optimal signal filtering, Radiotekhnika i elektronika, 1960, vol. 5, no. 11, pp. 1751–1763.
Jazwinski, A.H., Stochastic Process and Filtering Theory, New York: Academic Press, 1970.
Gelb, A., Applied Optimal Estimation, Cambridge: M.I.T. Press, 1974.
Gibbs, B.P., Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook, John Wiley&Sons, Inc., 2011. https://doi.org/10.1002/9780470890042
Simon, D., Optimal State Estimation: Kalman, H∞ and Nonlinear Approaches, New Jersey, NJ: John Wiley & Sons, Inc., 2006. https://doi.org/10.1002/0470045345
Basin, M., New Trends in Optimal Filtering and Control for Polynomial and Time-Delay Systems, Springer, 2008. https://doi.org/10.1007/978-3-540-70803-2
Särkkä, S., Bayesian Filtering and Smoothing, Cambridge University Press, 2013. https://doi.org/10.1017/CBO9781139344203
Rybakov, K.A., Statisticheskie metody analiza i fil’tratsii v nepreryvnykh stokhasticheskikh sistemakh (Statistical Analysis and Filtering Methods in Continuous-Time Stochastic Systems), Moscow: MAI, 2017.
Bolotin, Yu.V., Bragin, A.V., and Gulevskii, D.V., Studying the consistency of Extended Kalman Filter in pedestrian navigation with foot-mounted SINS, Gyroscopy and Navigation, 2021, vol. 12, no. 2, pp. 155–165. https://doi.org/10.1134/S2075108721020024
Bragin, A.V., and Bolotin, Yu.V., Novel aiding algorithm for autonomous pedestrian navigation, Proc. 30th St. Petersburg International Conference on Integrated Navigation Systems, 2023.
Rudenko, E.A., Autonomous path estimation for a descent vehicle using recursive Gaussian filters, J. Computer and Systems Sciences International, 2018, vol. 57, no. 5, pp. 695–712. https://doi.org/10.1134/S1064230718050131
Stepanov, O. and Motorin, A., Performance criteria for the identification of inertial sensor error models, Sensors, 2019, vol. 19, no. 9. https://doi.org/10.3390/s19091997
Koshaev, D.A., Multiple model algorithm for single-beacon navigation of autonomous underwater vehicle without its a priori position. Part 2. Simulation, Gyroscopy and Navigation, 2020, vol. 11, no. 3, pp. 319–332. https://doi.org/10.1134/S2075108720040069
Bobtsov, A., Yi, B., Ortega, R., and Astolfi, A., Generation of new exciting regressors for consistent on-line estimation of unknown constant parameters, IEEE Transactions on Automatic Control, 2022, vol. 67, no. 9, pp. 4746–4753. https://doi.org/10.1109/TAC.2022.3159568
Bobtsov, A.A., Nikolaev, N.A., and Ortega, R., A new class of observers of dynamic system state variables based on parametric identification, 15 multikonferentsiya po problemam upravleniya (15th Multiconference on Control Problems), 2022.
Matasov, A.I., Convex analysis methods for solving the estimation problems, 15 multikonferentsiya po problemam upravleniya (15th Multiconference on Control Problems), 2022.
Khlebnikov, M.V., Sparse filtering under exogenous disturbances, 15 multikonferentsiya po problemam upravleniya (15th Multiconference on Control Problems), 2022.
Shiryaev, V.I., Khadanovich, D.V., Podivilova, E.O., and Prokhorova, D.O., Guaranteed estimation algorithms and their implementation in real time, 15 multikonferentsiya po problemam upravleniya (15th Multiconference on Control Problems), 2022.
Stepanov, O.A., Litvinenko, Yu. A., Vasiliev, V.A., Toropov, A.B., and Basin, M.V., Polynomial filtering algorithm applied to navigation data processing under quadratic nonlinearities in system and measurement equations. Part 1. Description and comparison with Kalman type algorithms, Gyroscopy and Navigation, 2021, vol. 12, no. 3, pp. 205–223. https://doi.org/10.1134/S2075108721030068
Stepanov, O.A. and Isaev, A.M., Comparative analysis of the effectiveness of estimation algorithms in problems of processing navigation information based on predictive simulation, Proc. XVI Vserossiiskaya mul’tikonferentsiya po problemam upravleniya MKPU-2023 (16th All-Russian Multiconference on Control Problems MKPU-2023), pp. 219–222.
Dmitriev, S.P. and Shimelevich, L.I., Nelineinye zadachi obrabotki navigatsionnoi informatsii (Nonlinear Problems of Navigation Information Processing), Leningrad: TsNII RUMB, 1977.
Chen, Z., Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond, Adaptive Systems Lab., McMaster University, Hamilton, Canada, 2003.
Ristic, B., Arulampalam, S., and Gordon, N., Beyond the Kalman Filter: Particle Filter for Tracking Applications, Artech House Radar Library, 2004.
Doucet, A., Freitas, N., and Gordon, N., Sequential Monte Carlo Methods in Practice, New York, NY: Springer New York, 2001. https://doi.org/10.1007/978-1-4757-3437-9
Berkovskii, N.A. and Stepanov, O.A., Error of calculating the optimal Bayesian estimate using the Monte Carlo method in nonlinear problems, J. Computer and Systems Sciences International, 2013, vol. 52, no. 3, pp. 342–353. https://doi.org/10.1134/S1064230713010036
Stepanov, O.A. and Koshaev, D.A., Universal Matlab programs for analyzing the potential accuracy and sensitivity of linear nonstationary filtering algorithms, Giroskopiya i Navigatsiya, 2004, no. 2, pp. 81–92.
Koshaev, D.A. and Stepanov, O.A., Application of the Rao–Cramér inequality in nonlinear estimation problems, J. Computer and Systems Sciences International, 1997, vol. 36, no. 2.
Koshaev, D.A., Comparison of accuracy lower bounds, Teoriya i sistemy upravleniya. Izvestiya RAN, 1998, no. 2, pp. 62–65.
Van Trees, H.L. and Bell, K.L., Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking, San Francisco: Wiley-IEEE Press, 2007. https://doi.org/10.1109/9780470544198
Stepanov, O.A., Vasiliev, V.A., Basin, M.V., Tupysev, V.A., and Litvinenko, Y.A., Efficiency analysis of polynomial filtering algorithms in navigation data processing for a class of nonlinear discrete dynamical systems, IET Control Theory & Applications, 2020, vol. 15, no. 3, pp. 248–259. https://doi.org/10.1049/cth2.12036
Stepanov, O.A. and Vasil'ev, V.A., Cramer–Rao lower bound in nonlinear filtering problems under noises and measurement errors dependent on estimated parameters, Automation and Remote Control, 2016, vol. 77, no. 1, pp. 81–105. https://doi.org/10.1134/S0005117916010057
Julier, S.J., Uhlmann, J.K., and Durrant-Whyte, H.F., A new approach for filtering nonlinear systems, in Proc. IEEE American Control Conference, 1995, pp. 1628–1632.
Julier, S.J. and Uhlmann, J.K., A new extension of the Kalman filter to nonlinear systems, in Proc. AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, 1997.
Julier, S.J. and Uhlmann, J.K., Unscented filtering and nonlinear estimation, Proc. IEEE, 2004, vol. 92, no. 3, pp. 401–422. https://doi.org/10.1109/JPROC.2003.823141
Gustafsson, F. and Hendeby, G., Some relations between extended and unscented Kalman filters, IEEE Transactions on Signal Processing, 2012, vol. 60, no. 2, pp. 545–555. https://doi.org/10.1109/TSP.2011.2172431
Stepanov, O.A., Litvinenko, Yu.A., and Isaev, A.M., Third-order polynomial filter in the problem of estimating a scalar Markov process from nonlinear measurements, Matematicheskoe modelirovanie, komp’yuternyi i naturnyi eksperiment v estestvennykh naukakh, 2022, no. 4.
Stepanov, O.A. and Toropov, A.B., Nonlinear filtering for map-aided navigation. Part 2. Trends in the algorithm development, Gyroscopy and Navigation, 2016, vol. 7, no. 1, pp. 82–89. https://doi.org/10.1134/S2075108716010132
Methods and Technologies for Measuring the Earth’s Gravity Field Parameters, Peshekhonov, V.G. and Stepanov, O.A., Eds., Springer, 2022. https://doi.org/10.1007/978-3-031-11158-7
Melo, J. and Matos, A., Survey on advances on terrain based navigation for autonomous underwater vehicles, Ocean Engineering, Jul. 2017, vol. 139, pp. 250–264. https://doi.org/10.1016/j.oceaneng.2017.04.047
Stepanov, O.A., Vasil’ev, V.A., and Toropov, A.B., Map-aided navigation algorithms taking into account the variability of position errors of the corrected navigation system, Proc. 29th St. Petersburg International Conference on Integrated Navigation Systems, 2022.
Stepanov, O.A. and Toropov, A.B., Nonlinear filtering for map-aided navigation. Part 1. An overview of algorithms, Gyroscopy and Navigation, 2015, vol. 6, no. 4, pp. 324–337. https://doi.org/10.1134/S2075108715040148
Krasovskii, A.A., Beloglazov, I.N., and Chigin, G.P., Teoriya korrelyatsionno-ekstremal’nykh navigatsionnykh sistem (Theory of Correlation-Extremal Navigation Systems), Moscow: Nauka, 1979.
Stepanov, O.A., Optimal and sub-optimal filtering in integrated navigation systems, Chapter 8 in Aerospace Navigation Systems, Nebylov, A. and Watson, J., Eds., Wiley, 2016. https://doi.org/10.1002/9781119163060.ch8
Stepanov, O.A. and Toropov, A.B., Using sequential Monte Carlo methods in the correlation-extremal navigation problem, Izvestiya vuzov. Priborostroenie, 2010, vol. 53, no. 10, pp. 49–54.
Loeliger, H.-A., An introduction to factor graphs, in IEEE Signal Processing Magazine, Jan. 2004, vol. 21, no. 1, pp. 28–41. https://doi.org/10.1109/MSP.2004.1267047
Besekerskii, V.A. and Nebylov, A.V., Robastnye sistemy avtomaticheskogo upravleniya (Robust Automatic Control Systems), Moscow: Nauka, 1983.
Belov, R.V., Klyapnev, D.A., and Ogorodnikov, K.O., A method for reducing the computational complexity of the unscented Kalman filter, Trudy NGTU im. R.Ye. Alekseyeva, 2019, no. 1, pp. 17–23.
Nebylov, A.V., Loparev, A.V., and Nebylov, V.A., Methods for robust filtering based on numerical characteristics of input processes in solving problems of navigation information processing and motion control, Gyroscopy and Navigation, 2022, vol. 13, no. 3, pp. 170–179. https://doi.org/10.1134/S2075108722030063
ACKNOWLEDGMENTS
The authors express their gratitude to Yu.V. Bolotin, Е.А. Rudenko, and D.А. Koshaev for meaningful comments made during the discussion of the paper.
Funding
The research was supported by the Russian Science Foundation, project no. 23-19-00626, https://rscf.ru/project/23-19-00626/.
Ethics declarations
The authors of this work declare that they have no conflicts of interest.
Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Cite this article
Stepanov, O.A., Isaev, A.M. A Procedure of Comparative Analysis of Recursive Nonlinear Filtering Algorithms in Navigation Data Processing Based on Predictive Simulation. Gyroscopy Navig. 14, 213–224 (2023). https://doi.org/10.1134/S2075108723030094