Introduction

The integration of the Global Positioning System (GPS) with an inertial navigation system (INS) has been extensively applied to kinematic applications in the past few decades. The estimation environment in GPS/INS kinematic applications is often subject to change. Hence, the adaptive Kalman filter (KF), rather than a KF with fixed parameters, has been widely employed in GPS/INS integrated navigation systems (Yang and Xu 2003; Yang and Gao 2006; Lin 2015). The conventional adaptive KF can fulfill the accuracy requirements of many kinematic applications. There are, however, applications whose accuracy requirements cannot be met in this way, such as precise engineering surveying and cadastral work (Mohamed and Schwarz 1999; Leick et al. 2015; Yang et al. 2001). It is therefore necessary to develop an adaptive filtering algorithm with better overall performance.

The performance of the KF depends on the dynamic model, which describes the behavior of the state variables, and on the stochastic models, which describe the noise properties (Bar-Shalom et al. 2001; Niehen 2004). Corresponding to these two aspects, there are two approaches to the adaptive Kalman filtering problem: multiple-model-based adaptive estimation (MMAE) (Hide et al. 2004; Li and Jilkov 2005; Lan et al. 2011; Jin et al. 2015) and innovation-based adaptive estimation (IAE) (Mehra 1972; Mohamed and Schwarz 1999; Wang et al. 2000). The former runs a bank of Kalman filters in parallel under different dynamic models and statistical information, and combines the estimates of all models weighted by their non-zero model probabilities. In the latter, the statistical information, i.e., the measurement and/or process noise covariance matrices, is adapted directly based on changes in the innovation sequence.

In both the MMAE and IAE approaches, discrete-time differential models, e.g., the constant-velocity (CV) and constant-acceleration (CA) models, are generally employed to describe the behavior of the state variables (Bar-Shalom et al. 2001; Li and Jilkov 2003). However, the state variables in these models, i.e., position, velocity, and attitude, are correlated in practice. It is difficult to describe the statistical relationship of these states exactly, and insufficiently known a priori statistics therefore lead to inadequate estimation of the observable components through the coupling effect in the filter (Mohamed and Schwarz 1999). Another disadvantage of the KF based on a discrete-time differential model is its high dependence on a priori knowledge of the potentially unstable process and measurement noise statistics. Accurate a priori knowledge of the process and measurement information depends on factors such as the process dynamics and the type of application, and is generally difficult to obtain. Insufficiently known a priori filter statistics reduce the precision of the estimated filter states, introduce biases into the estimates, and can even lead to divergence of the filter (Ding et al. 2007). Research on the adaptive KF has mostly focused on computing the process or measurement noise covariance (Mohamed and Schwarz 1999; Ding et al. 2007; Niehen 2004; Yang and Gao 2006), whereas adaptation of the dynamic model itself has seldom been reported. An effective dynamic model greatly facilitates extracting useful information about the vehicle states from the observations.

To address these issues, an adaptive KF based on an autoregressive (AR) predictive model is proposed. The major contributions of this research are as follows: (1) The AR model is incorporated into the KF for state estimation. The closed-form solution for the AR model coefficients is derived from a convex quadratic program. The extra degrees of freedom of the AR model not only satisfy the polynomial constraint on the state variable, but also reduce the noise under the minimum mean-square error (MMSE) criterion. (2) Based on the KF with the AR predictive model (KF-AR), an improved innovation-based adaptive approach is developed, in which the process noise covariance is computed from the information of the innovation sequence. The adaptive KF-AR can thus fully exploit the real-time information.

Methodology

The dynamic model is a discrete-time motion model of the form

$$\varvec{x}_{k + 1} = \varvec{F}_{k + 1|k} \varvec{x}_{k} + \varvec{w}_{k}$$
(1)

where \(\varvec{x}_{k}\) denotes an M × 1 state vector at epoch \(t_{k}\). \(\varvec{F}_{k + 1|k}\) is the M × M state transition matrix, and the process noise \(\varvec{w}_{k}\) is a zero-mean Gaussian random process with the covariance matrix \(\varvec{Q}_{k}\), i.e., \(\varvec{w}_{k} \sim N\left( {0,\varvec{Q}_{k} } \right)\).

With position-only measurements, the measurement vector of the vehicle state at epoch \(t_{k}\) is given by

$$\varvec{z}_{k} = \varvec{Hx}_{k} + \varvec{v}_{k}$$
(2)

where the measurement matrix \(\varvec{H} = \left[ {\begin{array}{*{20}c} 1 & 0 & \cdots & 0 \\ \end{array} } \right]_{1 \times M}\). The measurement noise \(\varvec{v}_{k}\) is a zero-mean Gaussian random process, independent of \(\varvec{w}_{k}\), with the covariance matrix \(\varvec{R}_{k}\), i.e., \(\varvec{v}_{k} \sim N\left( {0,\varvec{R}_{k} } \right)\).

AR predictive model

The traditional discrete-time differential model can describe the vehicle's kinematic motion, but it is fixed and cannot adapt to the process and measurement noise intensities, which degrades performance to some extent. To address this problem, the AR model is incorporated into the KF to estimate the vehicle's state.

From polynomial model to AR model

According to the Weierstrass approximation theorem (Pérez and Quintana 2008), any continuous motion trajectory can be approximated by a polynomial of a certain degree to arbitrary accuracy. It is therefore possible to model the vehicle motion by an Nth-degree polynomial in Cartesian coordinates. The CV and CA models are special cases (N = 1 and N = 2, respectively) of this general Nth-degree model (Bar-Shalom et al. 2001). For example, in the CV model the state vector is \(\varvec{x}_{k}^{\text{CV}} = \left[ {\begin{array}{*{20}c} {r_{k} } & {\dot{r}_{k} } \\ \end{array} } \right]^{\text{T}}\), where \(r_{k}\) is the position and \(\dot{r}_{k}\) is the velocity. The state transition matrix of the CV model is (Challa et al. 2011)

$$\varvec{F}_{k + 1|k}^{\text{CV}} = \left[ {\begin{array}{*{20}c} 1 & T \\ 0 & 1 \\ \end{array} } \right]$$
(3)

where T is the sampling interval. The covariance matrix of the process noise \(\varvec{w}_{k}^{\text{CV}}\) is

$$\varvec{Q}_{k}^{\text{CV}} = q_{v} \left[ {\begin{array}{*{20}c} {T^{3} /3} & {T^{2} /2} \\ {T^{2} /2} & T \\ \end{array} } \right]$$
(4)

where \(q_{v}\) is the process noise intensity, which controls the magnitude of the velocity deviations.
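For reference, a minimal NumPy sketch (ours, not code from the original work) that assembles the CV matrices (3) and (4) for a given sampling interval and noise intensity is:

```python
import numpy as np

def cv_model(T: float, q_v: float):
    """Constant-velocity model: transition matrix (3) and process noise covariance (4)."""
    F = np.array([[1.0, T],
                  [0.0, 1.0]])
    Q = q_v * np.array([[T**3 / 3.0, T**2 / 2.0],
                        [T**2 / 2.0, T]])
    return F, Q
```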

Without loss of generality, after uniform sampling, the position at epoch \(t_{k}\) can be expressed as:

$$r_{k} = \sum\limits_{n = 0}^{N} {a_{n} \left( {t_{k} } \right)^{n} }$$
(5)

with a certain choice of the coefficients \(a_{n}\) (n = 0, 1, …, N), where \(t_{k} = kT\). Assume that the position \(r_{k + 1}\) can be predicted by the AR model from the M latest samples \(r_{k} , r_{k - 1} , \ldots , r_{k + 1 - M}\) (Väliviita et al. 1999):

$$r_{k + 1} = \sum\limits_{m = 1}^{M} {h_{m} r_{k + 1 - m} }$$
(6)

where \(h_{m}\) (m = 1, 2, …, M) are the AR model coefficients. According to (5), \(r_{k + 1}\) and \(r_{k + 1 - m}\) can be written as follows:

$$r_{k + 1} = \sum\limits_{n = 0}^{N} {a_{n} \left( {k + 1} \right)^{n} T^{n} ,} \quad r_{k + 1 - m} = \sum\limits_{n = 0}^{N} {a_{n} \left( {k + 1 - m} \right)^{n} T^{n} }$$
(7)

Substituting (7) into (6) and equating the coefficients of each \(a_{n}\), we obtain

$$\left( {k + 1} \right)^{n} = \sum\limits_{m = 1}^{M} {h_{m} \left( {k + 1 - m} \right)^{n} } ,\quad n = 0,1, \ldots ,N$$
(8)

When n = 0, Eq. (8) is

$$\sum\limits_{m = 1}^{M} {h_{m} } = 1$$
(9)

When n = 1, using (9) in (8) we can obtain

$$\sum\limits_{m = 1}^{M} {h_{m} m} = 0$$
(10)

When n = 2, using (9) and (10) in (8), we obtain

$$\sum\limits_{m = 1}^{M} {h_{m} m^{2} } = 0$$
(11)

Similarly, continuing the pattern of (9)–(11), we can infer that:

$$\sum\limits_{m = 1}^{M} {h_{m} m^{n} } = 0,\quad n = 1,2, \ldots ,N$$
(12)

The detailed proof of (12) is given in “Appendix 1”.

Equations (9) and (12) can be rewritten in matrix form as follows:

$$\varvec{Au}_{k} = \varvec{b}$$
(13)

where the AR model coefficients are denoted by \(\varvec{u}_{k} = \left[ {\begin{array}{*{20}c} {h_{1} } & {h_{2} } & \cdots & {h_{M} } \\ \end{array} } \right]^{\text{T}}\), \(\varvec{b} = [\begin{array}{*{20}c} 1 & 0 & \cdots & 0 \\ \end{array} ]^{\text{T}}\), and A is a Vandermonde matrix

$$\varvec{A} = \left[ {\begin{array}{*{20}c} 1 & 1 & \cdots & 1 \\ 1 & 2 & \cdots & M \\ 1 & {2^{2} } & \cdots & {M^{2} } \\ \vdots & \vdots & \ddots & \vdots \\ 1 & {2^{N} } & \cdots & {M^{N} } \\ \end{array} } \right]_{{\left( {N + 1} \right) \times M}}$$
(14)

where M is the number of AR model coefficients, and N is the highest degree of polynomial used to approximate the position.

When M = N + 1, the Vandermonde matrix A is nonsingular and (13) has a unique solution (Meyer 2000). The vehicle motion can then be described by (13) just as by the traditional discrete-time differential model, and the AR model coefficient vector is

$$\varvec{u}_{k} = \varvec{A}^{ - 1} \varvec{b}$$
(15)

For example, the AR model (N = 1, M = 2) is equivalent to the CV model and can also describe constant-velocity motion exactly (as shown in the simulation results).
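As an illustration (a small NumPy sketch of ours), solving (15) for N = 1, M = 2 yields \(h_{1} = 2\) and \(h_{2} = -1\), i.e., \(r_{k + 1} = 2r_{k} - r_{k - 1}\), which is exactly the constant-velocity extrapolation:

```python
import numpy as np

def ar_coefficients_exact(N: int):
    """Unique AR coefficients u_k = A^{-1} b of (15) for the square case M = N + 1."""
    M = N + 1
    # Rows of A are [1^n, 2^n, ..., M^n] for n = 0, ..., N, as in (14).
    A = np.vander(np.arange(1, M + 1), N + 1, increasing=True).T
    b = np.zeros(N + 1)
    b[0] = 1.0
    return np.linalg.solve(A, b)

print(ar_coefficients_exact(1))   # [ 2. -1.]  ->  r_{k+1} = 2*r_k - r_{k-1}
```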

When M > N + 1, the Vandermonde matrix A has full row rank and (13) is an underdetermined system with infinitely many solutions (Meyer 2000). The AR model then has extra degrees of freedom: it can not only satisfy the polynomial constraint on the vehicle motion, but also reduce the noise using the remaining degrees of freedom. In the following, the optimal AR model is derived in the framework of the KF under the MMSE criterion.

Derivation of AR model in the framework of KF

According to (6), the state vector in the AR model comprises the positions from epoch \(t_{k}\) back to epoch \(t_{k - M + 1}\), and it can be written as:

$$\varvec{x}_{k}^{\text{AR}} = \left[ {\begin{array}{*{20}c} {r_{k} } & {r_{k - 1} } & \ldots & {r_{k - M + 1} } \\ \end{array} } \right]^{\text{T}}$$
(16)

This differs from the state vector of the traditional differential model. The state transition matrix of the AR model, \(\varvec{F}_{k + 1|k}^{\text{AR}}\), is defined by

$$\varvec{F}_{k + 1|k}^{\text{AR}} = \left[ {\begin{array}{*{20}c} {h_{1} } & {h_{2} } & \ldots & {h_{M - 1} } & {h_{M} } \\ 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & \ldots & 0 & 0 \\ 0 & 0 & \ddots & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ \end{array} } \right]_{M \times M}$$
(17)

where the AR model coefficients \(h_{m}\) (m = 1, 2, …, M) to be optimized must satisfy the polynomial constraint on the vehicle motion in (13). Accordingly, the process noise in the AR model is \(\varvec{w}_{k}^{\text{AR}} = \left[ {\begin{array}{*{20}c} {w_{k} } & {w_{k - 1} } & \ldots & {w_{k - M + 1} } \\ \end{array} } \right]^{\text{T}}.\) It is an independent, identically distributed, zero-mean white Gaussian sequence with covariance matrix \(\varvec{Q}_{k}^{\text{AR}}\). Assuming that the stochastic changes of the positions at different epochs are mutually independent, the process noise covariance is

$$\varvec{Q}_{k}^{\text{AR}} = E\left[ {\varvec{w}_{k}^{\text{AR}} \left( {\varvec{w}_{k}^{\text{AR}} } \right)^{\text{T}} } \right] = q_{r} T \cdot \varvec{I}$$
(18)

where \(q_{r}\) is the process noise intensity in terms of position, and I is the M-dimensional identity matrix.
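The following short sketch (our own NumPy illustration; the function name is ours) assembles the AR state-space matrices (17) and (18) and the position-only measurement matrix of (2) for a given coefficient vector \(\varvec{u}_{k}\):

```python
import numpy as np

def ar_state_space(u: np.ndarray, T: float, q_r: float):
    """Build F^AR (17), Q^AR (18) and the position-only H of (2) for the state (16)."""
    M = u.size
    F = np.zeros((M, M))
    F[0, :] = u                    # first row carries the AR coefficients h_1, ..., h_M
    F[1:, :-1] = np.eye(M - 1)     # remaining rows shift the stored positions down by one
    Q = q_r * T * np.eye(M)        # mutually independent position disturbances, Eq. (18)
    H = np.zeros((1, M))
    H[0, 0] = 1.0                  # only the current position r_k is measured
    return F, Q, H
```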

Due to the dependence of the AR model on the coefficient vector \(\varvec{u}_{k}\), the Kalman recursive equations also depend explicitly on \(\varvec{u}_{k}\). Specifically,

$$\varvec{x}_{k|k - 1} \left( {\varvec{u}_{k} } \right) = \varvec{F}_{k|k - 1} \left( {\varvec{u}_{k} } \right) \cdot \varvec{x}_{k - 1|k - 1}$$
(19a)
$$\varvec{P}_{k|k - 1} \left( {\varvec{u}_{k} } \right) = \varvec{F}_{k|k - 1} \left( {\varvec{u}_{k} } \right) \cdot \varvec{P}_{k - 1|k - 1} \cdot \varvec{F}_{k|k - 1}^{\text{T}} \left( {\varvec{u}_{k} } \right) + \varvec{Q}_{k - 1}$$
(19b)
$$\varvec{S}_{k} \left( {\varvec{u}_{k} } \right) = \varvec{HP}_{k|k - 1} \left( {\varvec{u}_{k} } \right)\varvec{H}^{\text{T}} + \varvec{R}_{k}$$
(19c)
$$\varvec{K}_{k} \left( {\varvec{u}_{k} } \right) = \varvec{P}_{k|k - 1} \left( {\varvec{u}_{k} } \right) \cdot \varvec{H}^{\text{T}} \cdot \varvec{S}_{k}^{ - 1} \left( {\varvec{u}_{k} } \right)$$
(19d)
$$\varvec{x}_{k|k} \left( {\varvec{u}_{k} } \right) = \varvec{x}_{k|k - 1} \left( {\varvec{u}_{k} } \right) + \varvec{K}_{k} \left( {\varvec{u}_{k} } \right) \cdot \left[ {\varvec{z}_{k} - \varvec{Hx}_{k|k - 1} \left( {\varvec{u}_{k} } \right)} \right]$$
(19e)
$$\varvec{P}_{k|k} \left( {\varvec{u}_{k} } \right) = \left[ {\varvec{I} - \varvec{K}_{k} \left( {\varvec{u}_{k} } \right) \cdot \varvec{H}} \right] \cdot \varvec{P}_{k|k - 1} \left( {\varvec{u}_{k} } \right)$$
(19f)

where \(\varvec{x}_{k|k - 1}\) and \(\varvec{P}_{k|k - 1}\) are the a priori estimate and covariance matrix of the state vector, and \(\varvec{x}_{k|k}\) and \(\varvec{P}_{k|k}\) are the corresponding a posteriori estimate and covariance matrix. \(\varvec{S}_{k}\) is the innovation covariance, and \(\varvec{K}_{k}\) is the filter gain matrix.
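A compact NumPy sketch of one cycle of (19a)–(19f) is given below (the function and variable names are ours); the transition matrix F is assumed to have been built from \(\varvec{u}_{k}\) beforehand.

```python
import numpy as np

def kf_ar_step(x, P, z, F, H, Q, R):
    """One Kalman cycle (19a)-(19f) with the u_k-dependent transition matrix F."""
    x_pred = F @ x                                    # (19a) predicted state
    P_pred = F @ P @ F.T + Q                          # (19b) predicted covariance
    S = H @ P_pred @ H.T + R                          # (19c) innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)               # (19d) filter gain
    d = z - H @ x_pred                                # innovation residual
    x_upd = x_pred + K @ d                            # (19e) updated state
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P_pred     # (19f) updated covariance
    return x_upd, P_upd, d, S, K
```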

The estimation error variance of \(r_{k}\) is the element in the first row and first column of the covariance matrix \(\varvec{P}_{k|k}\), so the objective function in the MMSE sense can be expressed as (Jin et al. 2014):

$$\mathop {\text{minimize}}\limits_{{\varvec{u}_{k} }} \, \left( {\varvec{P}_{k|k} } \right)_{{\left( {1,1} \right)}}$$
(20)

subject to \(\varvec{Au}_{k} = \varvec{b}\), where \(\left( \cdot \right)_{(i,j)}\) denotes the (i, j) entry of the matrix in brackets. \(\left( {\varvec{P}_{k|k} } \right)_{(1,1)}\) can be obtained from (19c), (19d) and (19f) as follows:

$$\begin{aligned} \left( {\varvec{P}_{k|k} } \right)_{(1,1)} & = \left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} - \frac{{\left( {\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} } \right)^{2} }}{{\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} + \left( {\varvec{R}_{k} } \right)_{(1,1)} }} \\ & = \frac{{\left( {\varvec{R}_{k} } \right)_{(1,1)} \cdot \left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} }}{{\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} + \left( {\varvec{R}_{k} } \right)_{(1,1)} }} \\ & = \frac{{\left( {\varvec{R}_{k} } \right)_{(1,1)} }}{{1 + \left( {\varvec{R}_{k} } \right)_{(1,1)} /\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} }} \\ \end{aligned}$$
(21)

Since (21) is monotonically increasing in \(\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)}\), and by (19b) \(\left( {\varvec{P}_{k|k - 1} } \right)_{(1,1)} = \varvec{u}_{k}^{\text{T}} \varvec{P}_{k - 1|k - 1} \varvec{u}_{k} + \left( {\varvec{Q}_{k - 1} } \right)_{(1,1)}\), where \(\varvec{u}_{k}^{\text{T}}\) is the first row of \(\varvec{F}_{k|k - 1}\), while \(\varvec{u}_{k}\) does not affect \(\varvec{R}_{k}\) or \(\varvec{Q}_{k - 1}\), the cost function (20) is equivalent to

$$\begin{aligned} \mathop {\text{minimize}}\limits_{{\varvec{u}_{k} }} \, \varvec{u}_{k}^{\text{T}} \varvec{P}_{k - 1|k - 1} \varvec{u}_{k} \hfill \\ {\text{subject to }}\varvec{Au}_{k} = \varvec{b} \hfill \\ \end{aligned}$$
(22)

The optimization problem (22) is a convex quadratic program (Boyd and Vandenberghe 2004) and can be solved by the Lagrange multiplier technique (Singiresu 2009). The closed-form solution of (22), i.e., the optimal AR model coefficient vector, can be expressed as:

$$\varvec{u}_{k}^{*} = \varvec{P}_{k - 1|k - 1}^{ - 1} \varvec{A}^{\text{T}} \left( {\varvec{AP}_{k - 1|k - 1}^{ - 1} \varvec{A}^{\text{T}} } \right)^{ - 1} \varvec{b}$$
(23)

The derivation of (23) is given in “Appendix 2”. According to (23), the optimal AR model coefficients exploit not only the information of the polynomial motion, but also the variance/covariance information of the estimation error. Hence, the optimal AR model additionally reduces the noise, which the traditional discrete-time differential model does not.
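In code, the closed-form solution (23) reduces to two small linear solves; the sketch below (ours) avoids forming explicit inverses for numerical stability.

```python
import numpy as np

def optimal_ar_coefficients(P_prev: np.ndarray, A: np.ndarray, b: np.ndarray):
    """Optimal coefficients u_k* = P^{-1} A^T (A P^{-1} A^T)^{-1} b, Eq. (23)."""
    PinvAt = np.linalg.solve(P_prev, A.T)        # P_{k-1|k-1}^{-1} A^T
    lam = np.linalg.solve(A @ PinvAt, b)         # (A P^{-1} A^T)^{-1} b
    return PinvAt @ lam
```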

Development of adaptive Kalman filter based on optimal AR predictive model

Above, an optimal AR model was derived in the framework of the KF under the MMSE criterion. When the polynomial degree of the dynamic model and the noise statistics are a priori known, the KF-AR works well (as the simulation shows). In practice, however, the vehicle motion may be time varying and the noise statistics unstable, which calls for adaptive estimation techniques such as the MMAE and IAE. The MMAE has found application in controller design for flexible vehicle tracking problems (Li and Jilkov 2005; Lan et al. 2011; Jin et al. 2015), whereas the IAE is more applicable to INS/GPS systems used in the geomatics field (Mohamed and Schwarz 1999; Wang et al. 2000). Here, we only discuss the IAE approach based on the KF-AR.

Taking the application situations and computational complexity into account, we utilize the covariance-matching technique (Mehra 1972) to deal with fluctuations of the vehicle motion and maneuvers of different levels. The basic idea behind the covariance-matching technique is to make the residuals consistent with their theoretical covariance. As shown in Fig. 1, the actual innovation covariance is computed from the innovations, i.e., the measurements minus the predicted measurements, and is then used to compute the process noise covariance \(\varvec{Q}_{k}\) or the measurement noise covariance \(\varvec{R}_{k}\). Finally, the Kalman filter uses this statistical information computed online.

Fig. 1

Covariance-matching technique in adaptive Kalman filter algorithm

According to (19d), we can obtain

$$\varvec{HP}_{k|k - 1} = \varvec{S}_{k} \varvec{K}_{k}^{\text{T}}$$
(24)

so that (19f) can be expressed as:

$$\varvec{P}_{k|k} = \varvec{P}_{k|k - 1} - \varvec{K}_{k} \varvec{S}_{k} \varvec{K}_{k}^{\text{T}}$$
(25)

Substituting (19b) in (25), we obtain

$$\varvec{P}_{k|k} = \varvec{F}_{k|k - 1} \varvec{P}_{k - 1|k - 1} \varvec{F}_{k|k - 1}^{\text{T}} + \varvec{Q}_{k} - \varvec{K}_{k} \varvec{S}_{k} \varvec{K}_{k}^{\text{T}}$$
(26)

Thus, the process noise covariance is

$$\varvec{Q}_{k} = \varvec{P}_{k|k} - \varvec{F}_{k|k - 1} \varvec{P}_{k - 1|k - 1} \varvec{F}_{k|k - 1}^{\text{T}} + \varvec{K}_{k} \varvec{S}_{k} \varvec{K}_{k}^{\text{T}}$$
(27)

and, when the filter has approximately reached steady state so that \(\varvec{P}_{k|k} \approx \varvec{F}_{k|k - 1} \varvec{P}_{k - 1|k - 1} \varvec{F}_{k|k - 1}^{\text{T}}\), it can be approximated by

$$\varvec{Q}_{k} = \varvec{K}_{k} \varvec{S}_{k} \varvec{K}_{k}^{\text{T}}$$
(28)

According to the covariance-matching principle, the theoretical innovation covariance \(\varvec{S}_{k}\) can be replaced by the actual one, so that the process noise covariance can be adapted as follows:

$$\varvec{Q}_{k} = \varvec{K}_{k} \hat{\varvec{S}}_{k} \varvec{K}_{k}^{\text{T}}$$
(29)

where \(\hat{\varvec{S}}_{k}\) is obtained by averaging the previous residual sequence over a window:

$$\hat{\varvec{S}}_{k} = \frac{1}{W}\sum\limits_{i = 0}^{W - 1} {\varvec{d}_{k - i} \varvec{d}_{k - i}^{\text{T}} }$$
(30)

where \(\varvec{d}_{k} = \varvec{z}_{k} - \varvec{Hx}_{k|k - 1}\) is the innovation residual of the KF-AR, and an appropriate window size W has to be chosen to balance filter adaptivity against stability. To handle the case in which the epoch index k is smaller than the window length W, we define

$$\hat{\varvec{S}}_{k} = \left\{ {\begin{array}{*{20}c} {\frac{k - 1}{k}\hat{\varvec{S}}_{k - 1} + \frac{1}{k}\varvec{d}_{k} \varvec{d}_{k}^{\text{T}} ,} & {k < W} \\ {\frac{1}{W}\sum\limits_{i = k - W + 1}^{k} {\varvec{d}_{i} \varvec{d}_{i}^{\text{T}} } ,} & {k \ge W} \\ \end{array} } \right.$$
(31)

A full derivation of the filter statistical information matrices by the maximum likelihood (ML) method is given in Mohamed and Schwarz (1999). Assuming that the measurement noise covariance \(\varvec{R}_{k}\) is completely known, the explicit expression for \(\varvec{Q}_{k}\) obtained by the ML method is the same as (29). Hence, the filter statistical information matrices obtained by the covariance-matching technique are consistent with those obtained by the ML method. The same strategy used for \(\varvec{Q}_{k}\) can also be used to obtain an estimate of \(\varvec{R}_{k}\). According to (19c), the measurement noise covariance can be computed adaptively from the actual innovation covariance as follows:

$$\varvec{R}_{k} = \hat{\varvec{S}}_{k} - \varvec{HP}_{k|k - 1} \varvec{H}^{\text{T}}$$
(32)

where \(\hat{\varvec{S}}_{k}\) is also given by (31). For a detailed derivation of \(\varvec{R}_{k}\) using the ML method, see Mohamed and Schwarz (1999). These equations yield a full variance/covariance matrix that attempts to model some of the inherent correlations.
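A sketch of the covariance-matching adaptation (29)–(32) is given below (our illustration, not the authors' implementation); a bounded buffer realizes the window average (31), which for k < W simply averages all residuals collected so far.

```python
import numpy as np
from collections import deque

class InnovationAdapter:
    """Sliding-window innovation covariance (31) and the Q (29) / R (32) adaptation."""

    def __init__(self, window: int):
        self.residuals = deque(maxlen=window)    # keeps at most W outer products d d^T

    def innovation_covariance(self, d: np.ndarray) -> np.ndarray:
        """Update and return S_hat_k from the latest innovation d_k, Eq. (31)."""
        self.residuals.append(np.outer(d, d))
        return sum(self.residuals) / len(self.residuals)

    @staticmethod
    def adapt_Q(K: np.ndarray, S_hat: np.ndarray) -> np.ndarray:
        """Process noise covariance Q_k = K_k S_hat_k K_k^T, Eq. (29)."""
        return K @ S_hat @ K.T

    @staticmethod
    def adapt_R(S_hat: np.ndarray, H: np.ndarray, P_pred: np.ndarray) -> np.ndarray:
        """Measurement noise covariance R_k = S_hat_k - H P_{k|k-1} H^T, Eq. (32)."""
        return S_hat - H @ P_pred @ H.T
```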

According to the tests in Mohamed and Schwarz (1999), the error spectrum in the Q-only adaptive case is flatter than in the R-only and the combined Q and R adaptive cases. In other words, the Q-only adaptive algorithm is superior to the other two in practical estimation performance. Assuming that the measurement noise covariance is a priori known and the process noise covariance is computed adaptively online, the proposed adaptive Kalman filtering algorithm based on the optimal AR predictive model (AKF-AR) is given in Table 1.

Table 1 Adaptive Kalman filtering algorithm based on AR model (AKF-AR)

From a computational standpoint, the proposed algorithm adds the computation of the state transition matrix and the process noise covariance to the traditional Kalman filter. Since the AR model coefficients are available in closed form, its computational complexity is comparable to that of the traditional Kalman filter. The proposed algorithm modifies the dynamic model in the MMSE sense to fully exploit the real-time information. Hence, the proposed adaptive algorithm suppresses noise better than the conventional adaptive KF.

Validation

A simulation experiment and a field test were carried out to evaluate the performance of the proposed model and algorithm. In the simulation, the AR model is compared with the traditional discrete-time differential model (the CV model) in a one-dimensional constant-velocity scenario. In the field test, the AKF-AR is compared with the traditional adaptive KF based on the CV model (AKF-CV) in a two-dimensional maneuvering situation.

Evaluating the performance of the AR model

Assume that a vehicle moves with a constant velocity of v = 20 m/s. The measurement error variance of the vehicle position depends on the signal-to-noise ratio (Tsui 2005); here the measurement noise variance is R = 100 m², and the sampling interval is T = 1 s. In the AR models, the polynomial degree is N = 1 for estimating the position, and the number of AR model coefficients is M = 2, 3, and 4, respectively. The process noise covariance of the AR model is given by (18). To maintain consistency with the AR model, the process noise covariance of the CV model is given by (Jin et al. 2015)

$$\varvec{Q}_{k}^{\text{CV}} = q_{r} T \cdot \left[ {\begin{array}{*{20}c} 1 & {1/T} \\ {1/T} & {2/T^{2} } \\ \end{array} } \right]$$
(33)

Two parameter settings are considered: \(q_{r}\) matching the actual vehicle motion (\(q_{r} = 0\)) and not matching it (\(q_{r} = 0.1\)). The root-mean-square error (RMSE) comparisons of the estimated position in the two cases are shown in Figs. 2 and 3. A Monte Carlo simulation with 1000 runs was carried out over a period of 100 s. The kinematic accuracy (the mean of the RMSE) of the CV and AR models under the different parameter settings is given in Table 2.
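To make the setup concrete, the following self-contained sketch (our illustration, not the authors' simulation code) combines the pieces sketched earlier and runs a reduced Monte Carlo comparison of the KF-CV and the KF-AR (N = 1, M = 3) for the mismatched case \(q_{r} = 0.1\); the number of runs is reduced for brevity, and the exact RMSE values depend on the seed and the run count.

```python
import numpy as np

rng = np.random.default_rng(0)
T, R, q_r, v = 1.0, 100.0, 0.1, 20.0       # sampling interval, meas. variance, noise intensity, speed
steps, runs = 100, 200                     # reduced Monte Carlo size for illustration

def kf_step(x, P, z, F, H, Q, Rm):
    x_p = F @ x
    P_p = F @ P @ F.T + Q
    K = P_p @ H.T @ np.linalg.inv(H @ P_p @ H.T + Rm)
    return x_p + K @ (z - H @ x_p), (np.eye(len(x)) - K @ H) @ P_p

def run_cv(zs):
    F = np.array([[1.0, T], [0.0, 1.0]])
    Q = q_r * T * np.array([[1.0, 1.0 / T], [1.0 / T, 2.0 / T**2]])    # Eq. (33)
    H = np.array([[1.0, 0.0]])
    x, P = np.array([[zs[0]], [0.0]]), R * np.eye(2)
    est = [zs[0]]
    for z in zs[1:]:
        x, P = kf_step(x, P, np.array([[z]]), F, H, Q, np.array([[R]]))
        est.append(x[0, 0])
    return np.array(est)

def run_ar(zs, N=1, M=3):
    A = np.vander(np.arange(1, M + 1), N + 1, increasing=True).T       # Eq. (14)
    b = np.zeros(N + 1); b[0] = 1.0
    H = np.zeros((1, M)); H[0, 0] = 1.0
    Q = q_r * T * np.eye(M)                                            # Eq. (18)
    x, P = zs[M - 1::-1].reshape(M, 1), R * np.eye(M)                  # init with first M fixes
    est = list(zs[:M])
    for z in zs[M:]:
        PinvAt = np.linalg.solve(P, A.T)
        u = PinvAt @ np.linalg.solve(A @ PinvAt, b)                    # Eq. (23)
        F = np.zeros((M, M)); F[0] = u; F[1:, :-1] = np.eye(M - 1)     # Eq. (17)
        x, P = kf_step(x, P, np.array([[z]]), F, H, Q, np.array([[R]]))
        est.append(x[0, 0])
    return np.array(est)

truth = v * T * np.arange(steps)
err_cv, err_ar = [], []
for _ in range(runs):
    zs = truth + rng.normal(0.0, np.sqrt(R), steps)
    err_cv.append(run_cv(zs) - truth)
    err_ar.append(run_ar(zs) - truth)
print("KF-CV RMSE:", np.sqrt(np.mean(np.square(err_cv))))
print("KF-AR RMSE:", np.sqrt(np.mean(np.square(err_ar))))
```

With this reduced setup, the qualitative behavior reported in Table 2 (the AR model with M = 3 yielding a lower RMSE than the CV model when the parameter setting is mismatched) should be reproducible, although the exact numbers will differ from the 1000-run results of the paper.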

Fig. 2

RMSE of estimated position for CV and AR models when \(q_{r} = 0\)

Fig. 3

RMSE of estimated position for CV and AR models when \(q_{r} = 0.1\)

Table 2 Performance of KF-AR vs KF-CV for different parameters

From the simulation results above, we find that:

  1. The AR model (N = 1, M = 2) has the same positioning accuracy as the CV model, since it is equivalent to the CV model as mentioned above. Both models have the same degrees of freedom and describe the constant-velocity motion equally well.

  2. The AR models (N = 1, M = 3) and (N = 1, M = 4) achieve better positioning accuracy than the CV model, because they not only satisfy the polynomial motion constraint, as the CV model does, but also reduce the noise with their extra degrees of freedom.

  3. The AR model (N = 1, M = 4) is superior to the AR models (N = 1, M = 2) and (N = 1, M = 3): the longer the AR coefficient vector, the more information the filter can utilize and the higher the positioning accuracy of the KF-AR. However, since the vehicle may maneuver at unknown times in practice, an appropriate length of the AR coefficient vector is N + 1 ≤ M ≤ 5.

  4. The comparison of Figs. 2 and 3 reveals that the optimal AR model outperforms the traditional discrete-time differential model by a larger margin when the parameter setting does not match the actual vehicle motion, because the AR model can adjust itself to the statistical characteristics of the noise and approach optimal performance in that case.

Evaluating the performance of the AKF-AR

To evaluate the positioning accuracy of the proposed algorithm, a number of field tests were carried out. The device configuration used in the tests is shown in Fig. 4. The GNSS RTK system was manufactured by HI-TARGET Surveying Instrument Co. Ltd.; its model is A10, and its horizontal positioning accuracy is (10 + 1 × 10⁻⁶ × D) mm, where D is the distance between the base and the rover. The measurements obtained by the GNSS RTK system were considered the truth reference. The transceiver UHF radio allowed the working mode to be switched between base and rover. The base was set on the roof of the Jidian Building at Northwest A&F University; it is labeled by the red dot in Fig. 5. The rover and the handheld GNSS receiver were installed on the roof of a car. The handheld GNSS receiver was manufactured by UniStrong Science and Technology Co. Ltd., and its model is G130. The blue line in Fig. 5 shows the trajectory of the car. The theoretical positioning accuracy of the handheld GNSS receiver is 3–5 m. The raw data collected by the handheld GNSS receiver were post-processed by the different filtering algorithms in Matlab R2008b (Takasu and Yasuda 2008). The sampling rate of both receivers was 1 Hz.

Fig. 4

Device configuration in field test

Fig. 5

Test trajectory for navigating accuracy evaluation

Since the tests focused on evaluating the performance of the AR model, the AKF-CV was chosen for comparison with the AKF-AR in the post-processing procedure. For the AKF-AR, we selected polynomial degree N = 1 and M = 3 model coefficients. For both algorithms, the process noise intensity was \(q_{r} = 0.01\), the measurement error variance was R = 100 m², and the sliding window length was W = 50.

Figures 6 and 7 show the estimated 2D position errors of the AKF-AR and the AKF-CV, respectively. The position errors of the AKF-AR (N = 1, M = 3) are much smaller than those of the AKF-CV, since the AKF-AR reduces the noise as much as possible in the MMSE sense during the straight-line driving of the vehicle and modifies the dynamic model in real time with the current information during the turning motion. The AKF-AR adequately utilizes the online information carried by the innovation sequence. Comparison of the figures shows that the proposed algorithm performs better than the traditional one in terms of noise reduction.

Fig. 6

Estimated position error for AKF-AR

Fig. 7

Estimated position error for AKF-CV

Table 3 shows the kinematic accuracy of the AKF-AR and AKF-CV algorithms for different sampling intervals and window lengths. The AKF-AR performs better than the AKF-CV for the different sampling intervals at the same window length, and the advantage of the proposed algorithm becomes more evident as the sampling interval increases. Four different window lengths were chosen to analyze the performance of the AKF-AR and AKF-CV; again, the AKF-AR (N = 1, M = 3) performs better than the AKF-CV at the same sampling interval. However, an appropriate window size needs to be chosen according to the extent and frequency of maneuvering; empirically, W commonly lies in the range 50–200 for a sampling rate of 1 Hz.

Table 3 Kinematic accuracy of AKF-AR vs AKF-CV for different sampling intervals and window lengths

Conclusions

We have incorporated the AR model into the KF for vehicle navigation; the closed-form solution of its coefficients is derived from a convex quadratic program. The AR model not only satisfies the polynomial constraints on the state variable, but also reduces the noise under the MMSE criterion with its extra degrees of freedom. An innovation-based adaptive filtering algorithm (IAE) is developed on the basis of the KF-AR, in which the process noise covariance is computed from the real-time information of the innovation sequence.

The proposed algorithm can be applied to the single-state estimation performed before information fusion in a loosely coupled GPS/INS system, or to noise reduction in the post-processing of GPS receiver data. Compared with the traditional algorithm, the proposed algorithm has the following advantages:

  1. The KF-AR algorithm essentially filters the measurement data twice: first by a finite impulse response (FIR) filter and then by the Kalman filter. The coefficients of the FIR filter (the AR model) are obtained according to the MMSE principle, so the denoising effect of the KF-AR algorithm is superior to that of the traditional algorithm, which filters the data only once.

  2. The KF-AR algorithm first approximates the vehicle trajectory by a polynomial of a certain degree and then predicts the vehicle position using the AR model. This linearization technique can also be applied to other estimation problems involving nonlinear variables.

  3. The adaptive KF-AR algorithm fully exploits the real-time information of the innovation sequence, and its positioning accuracy for a maneuvering vehicle is higher than that of the traditional adaptive algorithm. The computational load of the proposed algorithm increases only slightly, since the closed-form solution of the AR model coefficients is readily obtained.