Limiting Kalman Filter

Abstract

In this chapter, we consider the special case where all of the known matrices in the system description are constant, i.e., independent of time.

Author information

Correspondence to Charles K. Chui.

Exercises

  6.1.

    Prove that the estimate \(\tilde{\mathbf {x}}_{k-1}\) in (6.7) is an unbiased estimate of \(\mathbf {x}_{k-1}\) in the sense that \(E(\tilde{\mathbf {x}}_{k-1})=E(\mathbf {x}_{k-1})\).

  6.2.

    Verify that (a derivation is sketched after this exercise list)

    $$\begin{aligned} \frac{d}{ds}A^{-1}(s)=-A^{-1}(s)\left[ \frac{d}{ds}A(s)\right] A^{-1}(s) . \end{aligned}$$
  6.3.

    Show that if \(\lambda _{min}\) is the smallest eigenvalue of P, then \(P\ge \lambda _{min}I\). Similarly, if \(\lambda _{max}\) is the largest eigenvalue of P, then \(P\le \lambda _{max}I\). (A proof sketch is given after this exercise list.)

  6.4.

    Let F be an \(n\times n\) matrix. Suppose that all the eigenvalues of F are of absolute value less than 1. Show that \(F^{k}\rightarrow 0\) as \(k\rightarrow \infty \). (A numerical illustration is given after this exercise list.)

  6.5.

    Prove that, for any \(n\times n\) matrices A and B (a proof sketch is given after this exercise list),

    $$\begin{aligned} (A+B)(A+B)^{\top }\le 2(AA^{\top }+BB^{\top })\ . \end{aligned}$$
  6.6.

    Let \(\{\underline{\xi }_{k}\}\) and \(\{\underline{\eta }_{k}\}\) be sequences of zero-mean Gaussian white system and measurement noise processes, respectively, and \(\vec {\mathbf {x}}_{k}\) be defined by (6.4). Show that

    $$\begin{aligned} \langle \mathbf {x}_{k-1}-\vec {\mathbf {x}}_{k-1},\ \underline{\xi }_{k-1}\rangle =0 \end{aligned}$$

    and

    $$\begin{aligned} \langle \mathbf {x}_{k-1}-\vec {\mathbf {x}}_{k-1},\ \underline{\eta }_{k}\rangle =0. \end{aligned}$$
  6.7.

    Verify that for the Kalman gain \(G_{k}\) (a verification sketch is given after this exercise list), we have

    $$\begin{aligned} -(I-G_{k}C)P_{k, k-1}C^{\top }G_{k}^{\top }+G_{k}R_{k}G_{k}^{\top }=0. \end{aligned}$$

    Using this formula, show that

    $$\begin{aligned} P_{k, k}&=(I-G_{k}C)AP_{k-1, k-1}A^{\top }(I-G_{k}C)^{\top }\\&\quad +(I-G_{k}C){{\Gamma }} Q_{k}{{\Gamma }}^{\top }(I-G_{k}C)^{\top }+G_{k}RG_{k}^{\top }. \end{aligned}$$
  6.8.

    By imitating the proof of Lemma 6.8, show that all the eigenvalues of \((I-GC)A\) are of absolute value less than 1.

  6.9.

    Let \(\underline{\epsilon }_{k}=\hat{\mathbf {x}}_{k}-\vec {\mathbf {x}}_{k}\) where \(\vec {\mathbf {x}}_{k}\) is defined by (6.4), and let \(\underline{\delta }_{k}= \mathbf {x}_{k}-\hat{\mathbf {x}}_{k}\). Show that

    $$\begin{aligned}&\langle \underline{\epsilon }_{k-1}, \underline{\xi }_{k-1}\rangle =0,&\quad \langle \underline{\epsilon }_{k-1}, \underline{\eta }_{k}\rangle =0,\\&\langle \underline{\delta }_{k-1},\ \underline{\xi }_{k-1}\rangle =0,&\quad \langle \underline{\delta }_{k-1},\ \underline{\eta }_{k}\rangle =0, \end{aligned}$$

    where \(\{\underline{\xi }_{k}\}\) and \(\{\underline{\eta }_{k}\}\) are zero-mean Gaussian white system and measurement noise processes, respectively.

  6.10.

    Let

    $$\begin{aligned} B_{j}=\langle \underline{\epsilon }_{j},\ \underline{\delta }_{j}\rangle A^{\top }C^{\top },\qquad j=0, 1,\ \cdots , \end{aligned}$$

    where \(\underline{\epsilon }_{j}=\hat{\mathbf {x}}_{j}-\vec {\mathbf {x}}_{j}\), \(\underline{\delta }_{j}=\mathbf {x}_{j}-\hat{\mathbf {x}}_{j}\), and \(\vec {\mathbf {x}}_{j}\) is defined by (6.4). Prove that the \(B_{j}\) are componentwise uniformly bounded.

  6.11.

    Derive formula (6.41).

  6.12.

    Derive the limiting (or steady-state) Kalman filtering algorithm for the scalar system:

    $$\begin{aligned} \left\{ \begin{array}{rl} x_{k+1}&{}=ax_{k}+\gamma \xi _{k}\\ v_{k}&{}=cx_{k}+\eta _{k}, \end{array}\right. \end{aligned}$$

    where \(a,\ \gamma \), and c are constants and \(\{\xi _{k}\}\) and \(\{\eta _{k}\}\) are zero-mean Gaussian white noise sequences with variances q and r, respectively. (A numerical sketch of the resulting algorithm is given after this exercise list.)
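
One way to obtain the identity in Exercise 6.2, assuming that \(A(s)\) is invertible and differentiable in \(s\), is to differentiate the relation \(A(s)A^{-1}(s)=I\):

$$\begin{aligned} 0=\frac{d}{ds}\left[ A(s)A^{-1}(s)\right] =\left[ \frac{d}{ds}A(s)\right] A^{-1}(s)+A(s)\left[ \frac{d}{ds}A^{-1}(s)\right] , \end{aligned}$$

and then solve for \(\frac{d}{ds}A^{-1}(s)\) by multiplying on the left by \(A^{-1}(s)\).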
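
For Exercise 6.3, assuming (as for the covariance matrices in this chapter) that P is symmetric, the spectral theorem gives an orthogonal matrix U with \(P=U\Lambda U^{\top }\), \(\Lambda =\mathrm {diag}(\lambda _{1},\cdots ,\lambda _{n})\), so that for every vector \(\mathbf {x}\),

$$\begin{aligned} \mathbf {x}^{\top }P\mathbf {x}=\sum _{i=1}^{n}\lambda _{i}\left( U^{\top }\mathbf {x}\right) _{i}^{2} \ge \lambda _{min}\Vert U^{\top }\mathbf {x}\Vert ^{2}=\lambda _{min}\,\mathbf {x}^{\top }\mathbf {x}, \end{aligned}$$

which is precisely \(P\ge \lambda _{min}I\); the bound \(P\le \lambda _{max}I\) follows in the same way.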
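
A proof of Exercise 6.4 typically uses the Jordan canonical form of F; the following is only a numerical illustration (the matrix F below is an example of my own, not one from the text) of how the powers decay once the spectral radius is less than 1:

    import numpy as np

    # Example matrix whose eigenvalues (0.5 and 0.9) lie strictly inside the unit circle.
    F = np.array([[0.5, 0.3],
                  [0.0, 0.9]])

    M = np.eye(2)
    for k in range(1, 201):
        M = M @ F                        # M now equals F**k
        if k % 50 == 0:
            print(k, np.linalg.norm(M))  # the norm of F**k shrinks toward 0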
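
For Exercise 6.5, one possible route is to expand \((A-B)(A-B)^{\top }\ge 0\), which gives \(AB^{\top }+BA^{\top }\le AA^{\top }+BB^{\top }\), and then

$$\begin{aligned} (A+B)(A+B)^{\top }=AA^{\top }+AB^{\top }+BA^{\top }+BB^{\top }\le 2(AA^{\top }+BB^{\top }). \end{aligned}$$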
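
For the first identity in Exercise 6.7, one may recall that the Kalman gain obtained in the earlier chapters has the familiar form \(G_{k}=P_{k, k-1}C^{\top }(CP_{k, k-1}C^{\top }+R_{k})^{-1}\) (the exact equation number is not restated here), so that \(G_{k}(CP_{k, k-1}C^{\top }+R_{k})=P_{k, k-1}C^{\top }\) and hence

$$\begin{aligned} (I-G_{k}C)P_{k, k-1}C^{\top }=P_{k, k-1}C^{\top }-G_{k}CP_{k, k-1}C^{\top }=G_{k}R_{k}; \end{aligned}$$

multiplying on the right by \(G_{k}^{\top }\) yields the first identity. For the second formula, this identity can be combined with \(P_{k, k}=(I-G_{k}C)P_{k, k-1}\) and \(P_{k, k-1}=AP_{k-1, k-1}A^{\top }+\Gamma Q_{k}\Gamma ^{\top }\).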
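
For Exercise 6.12, one standard route is to note that the limiting prediction variance p satisfies the scalar fixed-point equation \(p=a^{2}pr/(c^{2}p+r)+\gamma ^{2}q\), from which the limiting gain is \(g=pc/(c^{2}p+r)\) and the steady-state filter is \(\hat{x}_{k}=a\hat{x}_{k-1}+g(v_{k}-ca\hat{x}_{k-1})\). The sketch below (assuming \(r>0\); the function and variable names are my own) only iterates this recursion numerically, rather than carrying out the closed-form derivation the exercise asks for:

    # Iterate the scalar Riccati recursion for the prediction variance p until it
    # settles, form the limiting gain g, and run the constant-gain filter.

    def limiting_gain(a, c, gamma, q, r, tol=1e-12, max_iter=100000):
        p = gamma ** 2 * q                           # any nonnegative starting value
        for _ in range(max_iter):
            p_new = a ** 2 * p * r / (c ** 2 * p + r) + gamma ** 2 * q
            if abs(p_new - p) < tol:
                break
            p = p_new
        return p_new * c / (c ** 2 * p_new + r)      # limiting gain g

    def limiting_filter(data, a, c, gamma, q, r, xhat0=0.0):
        g = limiting_gain(a, c, gamma, q, r)
        xhat, estimates = xhat0, []
        for v in data:
            pred = a * xhat                          # one-step prediction
            xhat = pred + g * (v - c * pred)         # correction with the constant gain
            estimates.append(xhat)
        return estimates

Setting the two successive variances equal in the recursion gives, for \(c\ne 0\), a quadratic equation in p whose nonnegative root yields the closed-form limiting gain.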

Copyright information

© 2017 Springer International Publishing AG

Cite this chapter

Chui, C.K., Chen, G. (2017). Limiting Kalman Filter. In: Kalman Filtering. Springer, Cham. https://doi.org/10.1007/978-3-319-47612-4_6
