Orthogonal Projection and Kalman Filter


Abstract

The elementary approach to the derivation of the optimal Kalman filtering process discussed in Chap. 2 has the advantage that the optimal estimate \(\hat{\mathbf {x}}_{k}=\hat{\mathbf {x}}_{k|k}\) of the state vector \(\mathbf {x}_{k}\) is easily understood to be a least-squares estimate of \(\mathbf {x}_{k}\) with the properties that (i) the transformation that yields \(\hat{\mathbf {x}}_{k}\) from the data \(\overline{\mathbf {v}}_{k}=[\mathbf {v}_{0}^{\top }\cdots \mathbf {v}_{k}^{\top }]^{\top }\) is linear, (ii) \(\hat{\mathbf {x}}_{k}\) is unbiased in the sense that \(E(\hat{\mathbf {x}}_{k})=E(\mathbf {x}_{k})\), and (iii) it yields a minimum variance estimate with \((Var(\overline{\underline{\epsilon }}_{k, k}))^{-1}\) as the optimal weight.


Author information

Corresponding author: Charles K. Chui.

Exercises

  3.1.

    Let \(A\ne 0\) be a non-negative definite and symmetric constant matrix. Show that \(\mathrm {tr}\, A>0\). (Hint: Decompose A as \(A=BB^{\top }\) with \(B\ne 0\).)
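
    One way to carry out the hinted computation, writing \(b_{ij}\) for the entries of \(B\):

    $$\begin{aligned} \mathrm {tr}\, A=\mathrm {tr}\,(BB^{\top })=\sum _{i}(BB^{\top })_{ii}=\sum _{i}\sum _{j}b_{ij}^{2}>0, \end{aligned}$$

    since \(B\ne 0\) has at least one nonzero entry.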

  3.2.

    Let

    $$\begin{aligned} \hat{\mathbf {e}}_{j}=C_{j}(\mathbf {x}_{j}-\hat{\mathbf {y}}_{j-1})=C_{j}\left( \mathbf {x}_{j}-\sum _{i=0}^{j-1}\hat{P}_{j-1, i}\mathbf {v}_{i}\right) , \end{aligned}$$

    where \(\hat{P}_{j-1, i}\) are some constant matrices. Use Assumption 2.1 to show that

    $$\begin{aligned} \langle \underline{\eta }_{\ell },\ \hat{\mathbf {e}}_{j}\rangle =O_{q\times q} \end{aligned}$$

    for all \(\ell \ge j\).
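
    As a first step, linearity of the inner product \(\langle \mathbf {x},\ \mathbf {y}\rangle =E(\mathbf {x}\mathbf {y}^{\top })\) gives

    $$\begin{aligned} \langle \underline{\eta }_{\ell },\ \hat{\mathbf {e}}_{j}\rangle =\left( \langle \underline{\eta }_{\ell },\ \mathbf {x}_{j}\rangle -\sum _{i=0}^{j-1}\langle \underline{\eta }_{\ell },\ \mathbf {v}_{i}\rangle \hat{P}_{j-1, i}^{\top }\right) C_{j}^{\top }, \end{aligned}$$

    and each inner product on the right vanishes for \(\ell \ge j\): \(\mathbf {x}_{j}\) is determined by \(\mathbf {x}_{0}\) and \(\underline{\xi }_{0}, \cdots , \underline{\xi }_{j-1}\), while each \(\mathbf {v}_{i}\) with \(i\le j-1\) involves in addition only \(\underline{\eta }_{i}\) with \(i<\ell \), so Assumption 2.1 applies.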

  3.3.

    For random vectors \(\mathbf {w}_{0},\cdots , \mathbf {w}_{r}\), define

    $$\begin{aligned}&Y(\mathbf {w}_{0}, \cdots , \mathbf {w}_{r})\\ =&\left\{ \mathbf {y}:\mathbf {y}=\sum _{i=0}^{r}P_{i}\mathbf {w}_{i},\quad P_{0}, \cdots , P_{r},\ constant\ matrices\right\} . \end{aligned}$$

    Let

    $$\begin{aligned} \mathbf {z}_{j}=\mathbf {v}_{j}-C_{j}\sum _{i=0}^{j-1}\hat{P}_{j-1, i}\mathbf {v}_i \end{aligned}$$

    be defined as in (3.4) and \(\mathbf {e}_{j}=\Vert \mathbf {z}_{j}\Vert ^{-1}\mathbf {z}_{j}\). Show that

    $$\begin{aligned} Y(\mathbf {e}_{0}, \cdots ,\mathbf {e}_{k})=Y(\mathbf {v}_{0}, \cdots ,\mathbf {v}_{k}). \end{aligned}$$
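
    A possible route is induction on \(j\): by definition \(\mathbf {e}_{j}\in Y(\mathbf {v}_{0}, \cdots , \mathbf {v}_{j})\), and conversely

    $$\begin{aligned} \mathbf {v}_{j}=\Vert \mathbf {z}_{j}\Vert \mathbf {e}_{j}+C_{j}\sum _{i=0}^{j-1}\hat{P}_{j-1, i}\mathbf {v}_{i}, \end{aligned}$$

    so that \(\mathbf {v}_{j}\in Y(\mathbf {e}_{0}, \cdots , \mathbf {e}_{j})\) whenever \(\mathbf {v}_{0}, \cdots , \mathbf {v}_{j-1}\) already lie in \(Y(\mathbf {e}_{0}, \cdots , \mathbf {e}_{j-1})\), giving both inclusions.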
  3.4.

    Let

    $$\begin{aligned} \hat{\mathbf {y}}_{j-1}=\sum _{i=0}^{j-1}\hat{P}_{j-1, i}\mathbf {v}_i \end{aligned}$$

    and

    $$\begin{aligned} \mathbf {z}_{j}=\mathbf {v}_{j}-C_{j}\sum _{i=0}^{j-1}\hat{P}_{j-1, i}\mathbf {v}_i. \end{aligned}$$

    Show that

    $$\begin{aligned} \langle \hat{\mathbf {y}}_{j},\ \mathbf {z}_{k}\rangle =O_{n\times q},\qquad j=0, 1, \cdots , k-1. \end{aligned}$$
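
    As a hint, substituting \(\hat{\mathbf {y}}_{j}=\sum _{i=0}^{j}\hat{P}_{j, i}\mathbf {v}_{i}\) and using linearity,

    $$\begin{aligned} \langle \hat{\mathbf {y}}_{j},\ \mathbf {z}_{k}\rangle =\sum _{i=0}^{j}\hat{P}_{j, i}\langle \mathbf {v}_{i},\ \mathbf {z}_{k}\rangle , \end{aligned}$$

    so it suffices to show that \(\langle \mathbf {v}_{i},\ \mathbf {z}_{k}\rangle =O_{q\times q}\) for \(i\le j\le k-1\); this follows from Exercise 3.3 together with the orthogonality of \(\mathbf {z}_{k}\) to \(\mathbf {e}_{0}, \cdots , \mathbf {e}_{k-1}\) in the Gram–Schmidt construction (3.4).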
  3.5.

    Let \(\mathbf {e}_{j}\) be defined as in Exercise 3.3. Also define

    $$\begin{aligned} \check{\mathbf {x}}_{k}=\sum _{i=0}^{k}\langle \mathbf {x}_{k},\ \mathbf {e}_{i}\rangle \mathbf {e}_{i} \end{aligned}$$

    as in (3.10). Show that

    $$\begin{aligned}\begin{gathered} \langle \mathbf {x}_{k},\ \underline{\xi }_{k}\rangle =O_{n\times n},\qquad \langle \check{\mathbf {x}}_{k|k},\ \underline{\xi }_{j}\rangle =O_{n\times n},\\ \langle \mathbf {x}_{k},\ \underline{\eta }_{j}\rangle =O_{n\times q},\qquad \langle \check{\mathbf {x}}_{k-1|k-1},\ \underline{\eta }_{k}\rangle =O_{n\times q}, \end{gathered}\end{aligned}$$

    for \(j=0, 1, \cdots , k\).

  3.6.

    Consider the linear deterministic/stochastic system

    $$\begin{aligned} \left\{ \begin{array}{rcl} \mathbf {x}_{k+1}&{}=&{}A_{k}\mathbf {x}_k+B_{k}\mathbf {u}_{k}+\mathrm {\Gamma }_{k}\underline{\xi }_{k}\\ \mathbf {v}_{k}&{}=&{}C_{k}\mathbf {x}_{k}+D_{k}\mathbf {u}_{k}+\underline{\eta }_{k}, \end{array}\right. \end{aligned}$$

    where \(\{\mathbf {u}_{k}\}\) is a given sequence of deterministic control input m-vectors, \(1\le m\le n\). Suppose that Assumption 2.1 is satisfied. Derive the Kalman filtering algorithm for this model.
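
    For checking a derived algorithm numerically, the following is a minimal sketch, not the book's derivation: it assumes time-invariant matrices for brevity (the time-varying case simply indexes them by \(k\)), and the function and argument names are ours. The deterministic control shifts only the means: it enters the prediction through \(B\mathbf {u}_{k-1}\) and the innovation through \(D\mathbf {u}_{k}\), while the covariance and gain recursions are those of the standard filter.

    ```python
    import numpy as np

    def kalman_step(x_hat, P, u_prev, u_now, v, A, B, Gamma, C, D, Q, R):
        # Prediction: the known control B u_{k-1} shifts the predicted mean;
        # deterministic inputs do not affect the covariance propagation.
        x_pred = A @ x_hat + B @ u_prev
        P_pred = A @ P @ A.T + Gamma @ Q @ Gamma.T
        # Kalman gain from the usual innovation covariance.
        G = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        # Correction: subtract the known measurement contribution D u_k
        # before forming the innovation.
        x_new = x_pred + G @ (v - C @ x_pred - D @ u_now)
        P_new = (np.eye(len(x_new)) - G @ C) @ P_pred
        return x_new, P_new
    ```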

  3.7.

    Consider a simplified radar tracking model where a large-amplitude and narrow-width impulse signal is transmitted by an antenna. The impulse signal propagates at the speed of light c, and is reflected by a flying object being tracked. The radar antenna receives the reflected signal so that a time-difference \(\mathrm {\Delta } t\) is obtained. The range (or distance) d from the radar to the object is then given by \(d=c\mathrm {\Delta } t/2\). The impulse signal is transmitted periodically with period h. Assume that the object is traveling at a constant velocity w with random disturbance \(\xi \sim N(0, q)\), so that the range d satisfies the difference equation

    $$\begin{aligned} d_{k+1}=d_{k}+h(w_{k}+\xi _{k}). \end{aligned}$$

    Suppose also that the measured range using the formula \(d= c\mathrm {\Delta } t/2\) has an inherent error \(\mathrm {\Delta } d\) and is contaminated with noise \(\eta \) where \(\eta \sim N(0, r)\), so that

    $$\begin{aligned} v_{k}=d_{k}+\mathrm {\Delta } d_{k}+\eta _{k}. \end{aligned}$$

    Assume that the initial target range is \(d_{0}\) which is independent of \(\xi _{k}\) and \(\eta _{k}\), and that \(\{\xi _{k}\}\) and \(\{\eta _{k}\}\) are also independent (cf. Fig. 3.2). Derive a Kalman filtering algorithm as a range-estimator for this radar tracking system.
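
    To experiment with such a range-estimator, here is a minimal simulation sketch; the numerical values of \(h\), \(q\), \(r\), \(w\), \(\mathrm {\Delta } d\), and \(d_{0}\) are illustrative assumptions, and \(\mathrm {\Delta } d_{k}\) is treated as a known constant bias (one possible reading of the exercise).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h, q, r = 0.5, 0.01, 0.25            # period, Var(xi_k), Var(eta_k) -- illustrative
    w, delta_d, d0 = 10.0, 0.1, 1000.0   # velocity, range bias, initial range -- illustrative

    d = d0                               # true range
    d_hat, p = d0, 1.0                   # estimate and its variance (assumed initialization)
    for k in range(100):
        # True dynamics and noisy measurement of the model in the exercise.
        d = d + h * (w + rng.normal(0.0, np.sqrt(q)))
        v = d + delta_d + rng.normal(0.0, np.sqrt(r))
        # Scalar Kalman filter: predict, then correct.
        d_pred = d_hat + h * w           # from d_{k+1} = d_k + h w_k + h xi_k
        p_pred = p + h * h * q           # process noise enters as h * xi_k
        g = p_pred / (p_pred + r)        # scalar Kalman gain
        d_hat = d_pred + g * (v - delta_d - d_pred)   # known bias removed
        p = (1.0 - g) * p_pred
    ```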

  3.8.

    A linear stochastic system for radar tracking can be described as follows. Let \(\mathrm {\Sigma }\), \(\mathrm {\Delta } A\), \(\mathrm {\Delta } E\) be the range, the azimuthal angular error, and the elevational angular error, respectively, of the target, with the radar being located at the origin (cf. Fig. 3.3). Consider \(\mathrm {\Sigma },\mathrm {\Delta } A\), and \(\mathrm {\Delta } E\) as functions of time with first and second derivatives denoted by \(\dot{\mathrm {\Sigma }}\), \(\mathrm {\Delta }\dot{A}\), \(\mathrm {\Delta }\dot{E}\), \(\ddot{\mathrm {\Sigma }}\), \(\mathrm {\Delta }\ddot{A}\), \(\mathrm {\Delta }\ddot{E}\), respectively. Let \(h>0\) be the sampling time unit and set \(\mathrm {\Sigma }_{k}=\mathrm {\Sigma }(kh)\), \(\dot{\mathrm {\Sigma }}_{k}=\dot{\mathrm {\Sigma }}(kh)\), \(\ddot{\mathrm {\Sigma }}_{k}=\ddot{\mathrm {\Sigma }}(kh)\), etc. Then, using the second-degree Taylor polynomial approximation, the radar tracking model takes on the following linear stochastic state-space description:

    $$\begin{aligned} \left\{ \begin{array}{rcl} \mathbf {x}_{k+1}&{}=&{}\tilde{A}\mathbf {x}_{k}+\mathrm {\Gamma }_{k}\underline{\xi }_{k}\\ \mathbf {v}_{k}&{}=&{}\tilde{C}\mathbf {x}_{k}+\underline{\eta }_{k}, \end{array}\right. \end{aligned}$$

    where

    $$\begin{aligned} \mathbf {x}_{k}=&[\mathrm {\Sigma }_{k}\ \dot{\mathrm {\Sigma }}_{k}\ \ddot{\mathrm {\Sigma }}_{k}\ \mathrm {\Delta } A_{k}\ \mathrm {\Delta }\dot{A}_{k}\ \mathrm {\Delta }\ddot{A}_{k}\ \mathrm {\Delta } E_{k}\ \mathrm {\Delta }\dot{E}_{k}\ \mathrm {\Delta }\ddot{E}_{k}]^{\top },\\ \tilde{A}=&\left[ \begin{array}{ccccccccc} 1 &{} h &{} h^{2}/2 &{} &{} &{} &{} &{} &{} \\ 0 &{} 1 &{} h &{} &{} &{} &{} &{} &{} \\ 0 &{} 0 &{} 1 &{} &{} &{} &{} &{} &{} \\ &{} &{} &{} 1 &{} h &{} h^{2}/2 &{} &{} &{} \\ &{} &{} &{} 0 &{} 1 &{} h &{} &{} &{} \\ &{} &{} &{} 0 &{} 0 &{} 1 &{} &{} &{} \\ &{} &{} &{} &{} &{} &{} 1 &{} h &{} h^{2}/2\\ &{} &{} &{} &{} &{} &{} 0 &{} 1 &{} h\\ &{} &{} &{} &{} &{} &{} 0 &{} 0 &{} 1 \end{array}\right] ,\\&\tilde{C}= \left[ \begin{array}{ccccccccc} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \end{array}\right] , \end{aligned}$$

    and \(\{\underline{\xi }_{k}\}\) and \(\{\underline{\eta }_{k}\}\) are independent zero-mean Gaussian white noise sequences with \(Var(\underline{\xi }_{k})=Q_{k}\) and \(Var(\underline{\eta }_{k})=R_{k}\). Assume that

    $$\begin{aligned}\begin{gathered} \mathrm {\Gamma }_{k}=\left[ \begin{array}{ccc} \mathrm {\Gamma }_{k}^{1} &{} &{} \\ &{} \mathrm {\Gamma }_{k}^{2} &{} \\ &{} &{} \mathrm {\Gamma }_{k}^{3} \end{array}\right] ,\\ Q_{k}=\left[ \begin{array}{ccc} Q_{k}^{1} &{} &{} \\ &{} Q_{k}^{2} &{} \\ &{} &{} Q_{k}^{3} \end{array}\right] ,\qquad R_{k}=\left[ \begin{array}{ccc} R_{k}^{1} &{} &{} \\ &{} R_{k}^{2} &{} \\ &{} &{} R_{k}^{3} \end{array}\right] , \end{gathered}\end{aligned}$$

    where \(\mathrm {\Gamma }_{k}^{i}\) are \(3\times 3\) submatrices, \(Q_{k}^{i}\) are \(3\times 3\) non-negative definite symmetric submatrices, and \(R_{k}^{i}\) are positive constants (since \(\mathbf {v}_{k}\) is a 3-vector, \(R_{k}=Var(\underline{\eta }_{k})\) is \(3\times 3\)), for \(i=1, 2, 3\). Show that this system can be decoupled into three subsystems with analogous state-space descriptions.
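
    The heart of the decoupling argument is the common block structure, which the following small sketch makes explicit (the value of \(h\) is an illustrative assumption):

    ```python
    import numpy as np

    h = 0.1  # sampling period -- illustrative
    A_block = np.array([[1.0, h, h * h / 2],
                        [0.0, 1.0, h],
                        [0.0, 0.0, 1.0]])   # one 3x3 Taylor block of A~
    C_block = np.array([[1.0, 0.0, 0.0]])   # the matching 1x3 block of C~

    # A~ and C~ are block-diagonal, with one block per coordinate
    # (range, azimuth error, elevation error).
    A_tilde = np.kron(np.eye(3), A_block)   # 9x9
    C_tilde = np.kron(np.eye(3), C_block)   # 3x9

    # Since Gamma_k and Q_k are block-diagonal with conforming 3x3 blocks and
    # R_k is diagonal across the three channels, the recursions
    # P_pred = A P A^T + Gamma Q Gamma^T and G = P_pred C^T (C P_pred C^T + R)^{-1}
    # never mix blocks: starting from a block-diagonal P_0, every iterate stays
    # block-diagonal, so the three 3-dimensional subsystems can be filtered
    # independently.
    ```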

Copyright information

© 2017 Springer International Publishing AG

Cite this chapter

Chui, C.K., Chen, G. (2017). Orthogonal Projection and Kalman Filter. In: Kalman Filtering. Springer, Cham. https://doi.org/10.1007/978-3-319-47612-4_3