Abstract
This chapter is devoted to a most elementary introduction to the Kalman filtering algorithm. By assuming invertibility of certain matrices, the Kalman filtering “prediction-correction” algorithm is derived from the optimality criterion of least-squares unbiased estimation of the state vector with the optimal weight, using all available data. The filtering algorithm is first obtained for a system with no deterministic (control) input; by superimposing the deterministic solution, we then arrive at the general Kalman filtering algorithm.
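To fix ideas, the following is a minimal numerical sketch (in Python, with NumPy) of the prediction-correction cycle derived in this chapter, written for a time-invariant instance of the linear model \(\mathbf {x}_{k+1}=A\mathbf {x}_{k}+\mathrm {\Gamma }\underline{\xi }_{k}\), \(\mathbf {v}_{k}=C\mathbf {x}_{k}+\underline{\eta }_{k}\); the function name kalman_step and the wiring are illustrative assumptions rather than the chapter's own notation.

```python
import numpy as np

def kalman_step(x_hat, P, v, A, C, GQG, R):
    """One prediction-correction cycle of the Kalman filter.

    x_hat, P : filtered estimate x_{k-1|k-1} and covariance P_{k-1,k-1}
    v        : new observation v_k
    GQG      : process-noise covariance Gamma Q Gamma^T
    """
    # Prediction: propagate estimate and covariance through the state equation.
    x_pred = A @ x_hat                      # x_{k|k-1}
    P_pred = A @ P @ A.T + GQG              # P_{k,k-1}
    # Correction: update with the innovation v - C x_{k|k-1}.
    G = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # gain G_k
    x_new = x_pred + G @ (v - C @ x_pred)   # x_{k|k}
    P_new = (np.eye(len(x_hat)) - G @ C) @ P_pred            # P_{k,k}
    return x_new, P_new
```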
Exercises
2.1 Let
$$\begin{aligned} \underline{\overline{\epsilon }}_{k, j}=\left[ \begin{array}{c} \underline{\epsilon }_{k, 0}\\ \vdots \\ \underline{\epsilon }_{k, j} \end{array}\right] \qquad \text {and}\qquad \underline{\epsilon }_{k,\ell }=\underline{\eta }_{\ell }-C_{\ell }\sum _{i=\ell +1}^{k}\mathrm {\Phi }_{\ell i}\mathrm {\Gamma }_{i-1}\underline{\xi }_{i-1}, \end{aligned}$$where \(\{\underline{\xi }_{k}\}\) and \(\{\underline{\eta }_{k}\}\) are both zero-mean Gaussian white noise sequences with \(Var(\underline{\xi }_{k})=Q_{k}\) and \(Var(\underline{\eta }_{k})=R_{k}\). Define \(W_{k, j}=(Var(\underline{\overline{\epsilon }}_{k, j}))^{-1}\). Show that
$$\begin{aligned} W_{k, k-1}^{-1}=\begin{bmatrix} R_{0} & & 0\\ & \ddots & \\ 0 & & R_{k-1} \end{bmatrix} +Var \begin{bmatrix} C_{0}\sum \nolimits _{i=1}^{k}\mathrm {\Phi }_{0i}\mathrm {\Gamma }_{i-1}\underline{\xi }_{i-1}\\ \vdots \\ C_{k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1}\underline{\xi }_{k-1} \end{bmatrix} \end{aligned}$$and
$$\begin{aligned} W_{k, k}^{-1}=\begin{bmatrix} W_{k, k-1}^{-1}&0\\ 0&R_{k} \end{bmatrix}. \end{aligned}$$
2.2 Show that the sum of a positive definite matrix \(A\) and a non-negative definite matrix \(B\) is positive definite.
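(Hint: for any nonzero vector \(\mathbf {x}\),
$$\begin{aligned} \mathbf {x}^{\top }(A+B)\mathbf {x}=\mathbf {x}^{\top }A\mathbf {x}+\mathbf {x}^{\top }B\mathbf {x}\ge \mathbf {x}^{\top }A\mathbf {x}>0, \end{aligned}$$since \(A\) is positive definite and \(B\) is non-negative definite.)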
2.3 Let \(\underline{\overline{\epsilon }}_{k, j}\) and \(W_{k, j}\) be defined as in Exercise 2.1. Verify the relation
$$\begin{aligned} \underline{\overline{\epsilon }}_{k, k-1}=\underline{\overline{\epsilon }}_{k-1, k-1}-H_{k, k-1}\mathrm {\Gamma }_{k-1}\underline{\xi }_{k-1} \end{aligned}$$where
$$\begin{aligned} H_{k, j}=\left[ \begin{array}{c} C_{0}\mathrm {\Phi }_{0k}\\ \vdots \\ C_{j}\mathrm {\Phi }_{jk} \end{array}\right] , \end{aligned}$$and then show that
$$\begin{aligned} W_{k, k-1}^{-1}=W_{k-1, k-1}^{-1}+H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1}Q_{k-1}\mathrm {\Gamma }_{k-1}^{\top }\mathrm {\Phi }_{k-1, k}^{\top }H_{k-1, k-1}^{\top }. \end{aligned}$$
2.4 Use Exercise 2.3 and Lemma 1.2 to show that
$$\begin{aligned} W_{k, k-1}&=W_{k-1, k-1}-W_{k-1, k-1}H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1}(Q_{k-1}^{-1}\\&\quad +\mathrm {\Gamma }_{k-1}^{\top }\mathrm {\Phi }_{k-1, k}^{\top }H_{k-1, k-1}^{\top }W_{k-1, k-1}H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1})^{-1}\\&\quad \cdot \mathrm {\Gamma }_{k-1}^{\top }\mathrm {\Phi }_{k-1, k}^{\top }H_{k-1, k-1}^{\top }W_{k-1, k-1}. \end{aligned}$$
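Assuming, as the usage here suggests, that Lemma 1.2 is the standard matrix inversion lemma, this identity can be spot-checked numerically. In the sketch below \(M\) abbreviates \(H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1}\), and all matrices are random:

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(d):
    """Random symmetric positive definite d x d matrix."""
    X = rng.standard_normal((d, d))
    return X @ X.T + d * np.eye(d)

n, m = 5, 3
W = spd(n)                        # plays W_{k-1,k-1}
Q = spd(m)                        # plays Q_{k-1}
M = rng.standard_normal((n, m))   # plays H_{k-1,k-1} Phi_{k-1,k} Gamma_{k-1}

# Exercise 2.3 gives W_{k,k-1}^{-1} = W^{-1} + M Q M^T; the matrix
# inversion lemma turns this into the subtractive update of Exercise 2.4.
lhs = np.linalg.inv(np.linalg.inv(W) + M @ Q @ M.T)
rhs = W - W @ M @ np.linalg.inv(np.linalg.inv(Q) + M.T @ W @ M) @ M.T @ W
print(np.allclose(lhs, rhs))      # True
```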
2.5 Use Exercise 2.4 and the relation \(H_{k, k-1}=H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\) to show that
$$\begin{aligned}&H_{k, k-1}^{\top }W_{k, k-1}\\ =&\mathrm {\Phi }_{k-1, k}^{\top }\{I-H_{k-1, k-1}^{\top }W_{k-1, k-1}H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1}(Q_{k-1}^{-1}\\&+\mathrm {\Gamma }_{k-1}^{\top }\mathrm {\Phi }_{k-1, k}^{\top }H_{k-1, k-1}^{\top }W_{k-1, k-1}H_{k-1, k-1}\mathrm {\Phi }_{k-1, k}\mathrm {\Gamma }_{k-1})^{-1}\\&\cdot \mathrm {\Gamma }_{k-1}^{\top }\mathrm {\Phi }_{k-1, k}^{\top }\}H_{k-1, k-1}^{\top }W_{k-1, k-1}. \end{aligned}$$
2.6 Use Exercise 2.5 to derive the identity:
$$\begin{aligned}&(H_{k, k-1}^{\top }W_{k, k-1}H_{k, k-1})\mathrm {\Phi }_{k, k-1}(H_{k-1, k-1}^{\top }W_{k-1, k-1}H_{k-1, k-1})^{-1}\\&\cdot H_{k-1, k-1}^{\top }W_{k-1, k-1}=H_{k, k-1}^{\top }W_{k, k-1}. \end{aligned}$$
2.7 Use Lemma 1.2 to show that
$$\begin{aligned} P_{k, k-1}C_{k}^{\top }(C_{k}P_{k, k-1}C_{k}^{\top }+R_{k})^{-1}=P_{k, k}C_{k}^{\top }R_{k}^{-1}=G_{k}. \end{aligned}$$
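A numerical spot-check of the two gain formulas is again easy. In the sketch below, \(P_{k, k}\) is formed from \(P_{k, k-1}\) by the information-form update \((P_{k, k-1}^{-1}+C_{k}^{\top }R_{k}^{-1}C_{k})^{-1}\), which is consistent with the definitions in Exercise 2.8; the matrices are arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(1)

def spd(d):
    """Random symmetric positive definite d x d matrix."""
    X = rng.standard_normal((d, d))
    return X @ X.T + d * np.eye(d)

n, q = 4, 2
P_pred = spd(n)                   # P_{k,k-1}
R = spd(q)                        # R_k
C = rng.standard_normal((q, n))   # C_k

# Information-form update for P_{k,k}:
P_filt = np.linalg.inv(np.linalg.inv(P_pred) + C.T @ np.linalg.inv(R) @ C)

G1 = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
G2 = P_filt @ C.T @ np.linalg.inv(R)
print(np.allclose(G1, G2))        # True: both formulas give the gain G_k
```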
2.8 Start with \(P_{k, k-1}=(H_{k, k-1}^{\top }W_{k, k-1}H_{k, k-1})^{-1}\). Use Lemma 1.2, (2.8), and the definition \(P_{k, k}=(H_{k, k}^{\top }W_{k, k}H_{k, k})^{-1}\) to show that
$$\begin{aligned} P_{k, k-1}=A_{k-1}P_{k-1, k-1}A_{k-1}^{\top }+\mathrm {\Gamma }_{k-1}Q_{k-1}\mathrm {\Gamma }_{k-1}^{\top }. \end{aligned}$$
2.9 Use (2.5) and (2.2) to prove that
$$\begin{aligned} E(\mathbf {x}_{k}-\hat{\mathbf {x}}_{k|k-1})(\mathbf {x}_{k}-\hat{\mathbf {x}}_{k|k-1})^{\top }=P_{k, k-1} \end{aligned}$$and
$$\begin{aligned} E(\mathbf {x}_{k}-\hat{\mathbf {x}}_{k|k})(\mathbf {x}_{k}-\hat{\mathbf {x}}_{k|k})^{\top }=P_{k, k}. \end{aligned}$$
2.10 Consider the one-dimensional linear stochastic dynamic system
$$\begin{aligned} x_{k+1}=ax_{k}+\xi _{k},\qquad x_{0}=0, \end{aligned}$$where \(E(x_{k})=0\), \(Var(x_{k})=\sigma ^{2}\), \(E(x_{k}\xi _{j})=0\), \(E(\xi _{k})=0\), and \(E(\xi _{k}\xi _{j})=\mu ^{2}\delta _{kj}\). Prove that \(\sigma ^{2}=\mu ^{2}/(1-a^{2})\) and \(E(x_kx_{k+j})= a^{|j|}\sigma ^{2}\) for all integers j.
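Both claims are easy to spot-check by simulation. The following sketch uses illustrative values of \(a\) and \(\mu \), and an arbitrary burn-in and sample size:

```python
import numpy as np

rng = np.random.default_rng(2)
a, mu = 0.8, 1.0
n, burn = 100_000, 1_000

# Simulate x_{k+1} = a x_k + xi_k with Var(xi_k) = mu^2, then drop a
# transient so the retained samples are approximately stationary.
x = np.zeros(n + burn)
xi = mu * rng.standard_normal(n + burn)
for k in range(n + burn - 1):
    x[k + 1] = a * x[k] + xi[k]
x = x[burn:]

sigma2 = mu**2 / (1 - a**2)
print(x.var(), sigma2)                 # both close to 2.78
for j in range(4):                     # E(x_k x_{k+j}) = a^{|j|} sigma^2
    print(j, (x[: n - j] * x[j:]).mean(), a**j * sigma2)
```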
2.11 Consider the one-dimensional stochastic linear system
$$\begin{aligned} \left\{ \begin{array}{rcl} x_{k+1} &{} = &{} x_{k}\\ v_{k} &{} = &{} x_{k}+\eta _{k} \end{array}\right. \end{aligned}$$with \(E(\eta _{k})=0\), \(Var(\eta _{k})=\sigma ^{2}\), \(E(x_{0})=0\) and \(Var(x_{0})=\mu ^{2}\). Show that
$$\begin{aligned} \left\{ \begin{array}{l} \hat{x}_{k|k}=\hat{x}_{k-1|k-1}+\frac{\mu ^{2}}{\sigma ^{2}+k\mu ^{2}}(v_{k}-\hat{x}_{k-1|k-1})\\ \hat{x}_{0|0}=0 \end{array}\right. \end{aligned}$$and that \(\hat{x}_{k|k}\rightarrow c\) for some constant c as \(k\rightarrow \infty \).
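A quick simulation of this recursion (a sketch, with illustrative \(\sigma ^{2}\) and \(\mu ^{2}\)) also suggests what the limit \(c\) is: since the gain \(\mu ^{2}/(\sigma ^{2}+k\mu ^{2})\) decays like 1/k, \(\hat{x}_{k|k}\) behaves like a running average of the data and settles at the realized value of the constant state.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, mu2 = 4.0, 1.0
x0 = mu2**0.5 * rng.standard_normal()    # constant state, Var(x_0) = mu^2

x_hat = 0.0                              # x_hat_{0|0} = 0
for k in range(1, 5001):
    v = x0 + sigma2**0.5 * rng.standard_normal()   # v_k = x_k + eta_k
    x_hat += mu2 / (sigma2 + k * mu2) * (v - x_hat)

# The estimate settles at the realized constant state x_0.
print(x_hat, x0)                         # nearly equal for large k
```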
2.12 Let \(\{\mathbf {v}_{k}\}\) be a sequence of data obtained from the observation of a zero-mean random vector \(\mathbf {y}\) with unknown variance \(Q\). The variance of \(\mathbf {y}\) can be estimated by
$$\begin{aligned} \hat{Q}_{N}=\frac{1}{N}\sum _{k=1}^{N}(\mathbf {v}_{k}\mathbf {v}_{k}^{\top }). \end{aligned}$$Derive a prediction-correction recursive formula for this estimation.
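Splitting the last term off the sum gives one natural prediction-correction form, \(\hat{Q}_{N}=\hat{Q}_{N-1}+\frac{1}{N}(\mathbf {v}_{N}\mathbf {v}_{N}^{\top }-\hat{Q}_{N-1})\), with \(\hat{Q}_{N-1}\) as the prediction and the bracketed term as the correction. The sketch below checks this numerically (the target \(Q\) and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
Q_true = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed target variance
L = np.linalg.cholesky(Q_true)

Q_hat = np.zeros((2, 2))                      # Q_hat_0 = 0
for N in range(1, 100_001):
    v = L @ rng.standard_normal(2)            # observation of y, Var(y) = Q
    # Q_hat_N = Q_hat_{N-1} + (1/N)(v_N v_N^T - Q_hat_{N-1}),
    # algebraically identical to the batch average (1/N) sum_k v_k v_k^T.
    Q_hat += (np.outer(v, v) - Q_hat) / N

print(Q_hat)                                  # close to Q_true
```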
2.13 Consider the linear deterministic/stochastic system
$$\begin{aligned} \left\{ \begin{array}{rcl} \mathbf {x}_{k+1}&{}=&{}A_{k}\mathbf {x}_k+B_{k}\mathbf {u}_{k}+\mathrm {\Gamma }_{k}\underline{\xi }_{k}\\ \mathbf {v}_{k}&{}=&{}C_{k}\mathbf {x}_{k}+D_{k}\mathbf {u}_{k}+\underline{\eta }_{k}, \end{array}\right. \end{aligned}$$where \(\{\mathbf {u}_{k}\}\) is a given sequence of deterministic control input m-vectors, \(1\le m\le n\). Suppose that Assumption 2.1 is satisfied and the matrix \(Var(\underline{\overline{\epsilon }}_{k, j})\) is nonsingular (cf. (2.2) for the definition of \(\underline{\overline{\epsilon }}_{k, j}\)). Derive the Kalman filtering equations for this model.
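Structurally, the deterministic terms simply ride along: \(B_{k-1}\mathbf {u}_{k-1}\) enters the prediction of the state and \(D_{k}\mathbf {u}_{k}\) is subtracted from the data in the correction, while the covariance recursion is unchanged from the no-control case. A sketch of one cycle under that structure (time-invariant matrices; the function name and wiring are assumptions, not the book's final equations):

```python
import numpy as np

def kalman_step_control(x_hat, P, v, u_prev, u, A, B, C, D, Gam, Q, R):
    """One cycle for x_{k+1} = A x_k + B u_k + Gamma xi_k,
    v_k = C x_k + D u_k + eta_k.

    u_prev is u_{k-1} (enters the prediction); u is u_k (enters the
    correction). The control is deterministic, so the covariance
    recursion is the same as without control.
    """
    x_pred = A @ x_hat + B @ u_prev                  # x_{k|k-1}
    P_pred = A @ P @ A.T + Gam @ Q @ Gam.T           # P_{k,k-1}
    G = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + G @ (v - C @ x_pred - D @ u)    # x_{k|k}
    P_new = (np.eye(len(x_hat)) - G @ C) @ P_pred    # P_{k,k}
    return x_new, P_new
```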
2.14 In digital signal processing, a widely used mathematical model is the following so-called ARMA (autoregressive moving-average) process:
$$\begin{aligned} \mathbf {v}_{k}=\sum _{i=1}^{N}B_{i}\mathbf {v}_{k-i}+\sum _{i=0}^{M}A_{i}\mathbf {u}_{k-i}, \end{aligned}$$where the \(n\times n\) matrices \(B_{1}, \cdots , B_{N}\) and the \(n \times q\) matrices \(A_{0}, A_{1}, \cdots , A_{M}\) are independent of the time variable k, and \(\{\mathbf {u}_{k}\}\) and \(\{\mathbf {v}_{k}\}\) are input and output digital signal sequences, respectively (cf. Fig. 2.3). Assuming that \(M\le N\), show that the input-output relationship can be described as a state-space model
$$\begin{aligned} \left\{ \begin{array}{rcl} \mathbf {x}_{k+1} &{} = &{} A\mathbf {x}_{k}+B\mathbf {u}_{k}\\ \mathbf {v}_{k} &{} = &{} C\mathbf {x}_{k}+D\mathbf {u}_{k} \end{array}\right. \end{aligned}$$with \(\mathbf {x}_{0}=0\), where the constant matrices A, B, C, and D are to be determined from the \(B_{i}\)’s and \(A_{i}\)’s.
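For the scalar case (\(n=q=1\), so the \(B_{i}\) and \(A_{i}\) are numbers), one concrete companion-form realization is sketched below and verified against the ARMA recursion by simulation. This is a standard construction under the assumption \(M\le N\), not necessarily the block-matrix form intended in the text:

```python
import numpy as np

rng = np.random.default_rng(5)
b = np.array([0.5, -0.3, 0.2])     # B_1, ..., B_N (scalars), N = 3
a = np.array([1.0, 0.4])           # A_0, ..., A_M, M = 1 <= N
N, M = len(b), len(a) - 1
a_pad = np.concatenate([a, np.zeros(N - M)])   # treat A_i = 0 for M < i <= N

# Companion-form realization: x_{k+1} = A x_k + B u_k, v_k = C x_k + D u_k.
A = np.zeros((N, N))
A[:, 0] = b                        # first column carries B_1, ..., B_N
A[: N - 1, 1:] = np.eye(N - 1)     # shift structure
B = a_pad[1:] + b * a_pad[0]       # i-th entry: A_i + B_i A_0
C = np.zeros(N); C[0] = 1.0
D = a_pad[0]

# Check input-output equivalence against the ARMA recursion directly.
u = rng.standard_normal(50)
v_arma = np.zeros(50)
v_ss = np.zeros(50)
x = np.zeros(N)                    # x_0 = 0
for k in range(50):
    v_arma[k] = sum(b[i] * v_arma[k - 1 - i] for i in range(N) if k - 1 - i >= 0) \
        + sum(a[i] * u[k - i] for i in range(M + 1) if k - i >= 0)
    v_ss[k] = C @ x + D * u[k]
    x = A @ x + B * u[k]

print(np.allclose(v_arma, v_ss))   # True
```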