
A novel nonlinear filter through constructing the parametric Gaussian regression process

Review article published in Nonlinear Dynamics.

Abstract

In this paper, a new variational Gaussian regression filter (VGRF) is proposed by constructing a linear parametric Gaussian regression (LPGR) process with variational parameters. By modeling the measurement likelihood through LPGR to implement the Bayesian update, the nonlinear measurement function is not directly involved in the state estimation, and the costly Monte Carlo computation used in traditional methods is avoided. Hence, in VGRF, the inference of the state posterior and the variational parameters can be carried out tractably and simply by the variational Bayesian inference approach. Secondly, a filtering evidence lower bound (F-ELBO) is proposed as a quantitative evaluation rule for different filters; the higher estimation accuracy of VGRF compared with traditional methods can be explained by the F-ELBO. Thirdly, a relationship between the F-ELBO and the monitored ELBO (M-ELBO) is established, namely that the F-ELBO is always larger than the M-ELBO. Based on this finding, the accuracy improvement of VGRF can be explained theoretically. Finally, three numerical examples illustrate the effectiveness of VGRF.
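The use of an evidence lower bound as a quantitative evaluation rule can be illustrated on a scalar linear-Gaussian toy model where the log-evidence is known in closed form: the ELBO of any approximate posterior lower-bounds the log-evidence, and a better posterior approximation yields a higher (tighter) bound. This is a generic sketch, not the paper's VGRF; all names and values (`h`, `r`, `z`) are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
h, r = 2.0, 0.5      # measurement gain and noise variance (illustrative)
z = 1.3              # one observed measurement

# Log-evidence in closed form: z ~ N(0, h^2 + r) under the prior x ~ N(0, 1).
log_evidence = norm.logpdf(z, 0.0, np.sqrt(h**2 + r))

# Exact (conjugate) posterior of x given z.
post_var = 1.0 / (1.0 + h**2 / r)
post_mean = post_var * h * z / r

def elbo(m, s2, n=200_000):
    """Monte Carlo estimate of E_q[log p(z|x) + log p(x) - log q(x)]."""
    x = rng.normal(m, np.sqrt(s2), size=n)
    return np.mean(norm.logpdf(z, h * x, np.sqrt(r))
                   + norm.logpdf(x, 0.0, 1.0)
                   - norm.logpdf(x, m, np.sqrt(s2)))

elbo_tight = elbo(post_mean, post_var)  # q = exact posterior: bound equals log-evidence
elbo_loose = elbo(0.0, 1.0)             # q = prior: bound drops by KL(q || posterior)
```

A filter whose implied approximate posterior is closer to the true posterior therefore reports a higher bound, which is the intuition behind ranking filters by an ELBO-type quantity.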



Author information

Correspondence to Xiaoxu Wang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the National Natural Science Foundation of China under Grants 61873208, 61573287, 61203234, 61135001, and 61374023, in part by the Shaanxi Natural Science Foundation of China under Grant 2017JM6006, in part by the Aviation Science Foundation of China under Grant 2016ZC53018, in part by the Fundamental Research Funds for Central Universities under Grant 3102017jghk02009, and in part by the Equipment Pre-research Foundation under Grant 2017-HT-XG.

Appendices

Appendix A: Proof of Theorem 1

Given \(p({x_k},\mu ,\beta ,\lambda ,Z_1^k)\), applying mean-field theory yields

$$\begin{aligned}&{\log q({x_k})q(\beta )}{ = {E_{{q_t}(\mu ,\lambda )}}\left\{ {\log p({x_k},\mu ,\beta ,\lambda ,Z_1^k)} \right\} + const}\\&{ = - \frac{1}{{\mathrm{2}}}{E_{{q_t}(\mu ,\lambda )}}\left[ {\lambda D({z_k} - {H_k}{x_k} - \mu ,{\bar{R}})} \right] + \log prio({x_k})}\\&{} { + \sum \limits _{j = 1}^{k - 1} {\left[ {\frac{1}{{\mathrm{2}}} \log \beta - \frac{\beta }{{\mathrm{2}}}{E_{{q_t}(\mu ,\lambda )}}\left[ {\lambda D({z_j} - {H_j}{{{\bar{x}}}_j} - \mu ,{\bar{R}}_j^*)} \right] } \right] } }\\&{} { + ({a_0} - 1)\log \beta - \beta {b_0} + const} \end{aligned}$$

where, for brevity, \(u\) is omitted in our derivation of the filter's structure and will be brought back into the final result. We have

$$\begin{aligned}&{E_{{q_t}(\mu ,\lambda )}}\left[ {\lambda D({z_k} - {H_k}{x_k} - \mu ,{\bar{R}})} \right] \\&{\mathrm{= }}{E_{{q_t}(\mu ,\lambda )}}{\mathrm{[}}\lambda {\mathrm{]}}D({z_k} - {H_k}{x_k} - {{{\hat{\mu }} }_t},{\bar{R}}) + const\\&{E_{{q_t}(\mu ,\lambda )}}\left[ {\lambda D({z_j} - {H_j}{{{\bar{x}}}_j} - \mu ,{\bar{R}}_j^*)} \right] \\&{\mathrm{= }}{E_{{q_t}(\mu ,\lambda )}}{\mathrm{[}}\lambda {\mathrm{]}}D({z_j} - {H_j}{{{\bar{x}}}_j} - {{{\hat{\mu }} }_t},{\bar{R}}_j^*) + Tr[{\bar{R}}_j^*\hat{M}_t^{ - 1}]\\&{E_{{q_t}(\mu ,\lambda )}}[\lambda ]={E_{{q_t}(\lambda )}}[\lambda ]=\frac{{\hat{c}}_t}{{\hat{d}}_t}. \end{aligned}$$

For the following deduction, we define

$$\begin{aligned}&prio({z_k})\\&\buildrel \varDelta \over = \int {N{\mathrm{(}}{z_k}{\mathrm{|}}{H_k}{x_k}{\mathrm{+ }}{{{\hat{\mu }} }_t},[\frac{{\hat{c}}_t}{{\hat{d}}_t} {\bar{R}}]^{-1} {\mathrm{)}}} N({x_k}|{{{\bar{x}}}_k},{\bar{P}}_k^{ - 1})d{x_k}\\&=N{\mathrm{(}}{z_k}{\mathrm{|}}{H_k}{{{\bar{x}}}_k}{\mathrm{+ }}{{{\hat{\mu }} }_t},{H_k}{\bar{P}}_k^{ - 1}H_k^T + [\frac{{\hat{c}}_t}{{\hat{d}}_t} \bar{R}]^{-1} {\mathrm{)}}. \end{aligned}$$

Obviously, \(prio({z_k})\) is a constant independent of the state. Rewriting \({\log q({x_k})q(\beta )}\) with \(prio({z_k})\), we have

$$\begin{aligned}&\log q({x_k})q(\beta ) \\&= \log \frac{{N{\mathrm{(}}{z_k}{\mathrm{|}}{H_k}{x_k}{\mathrm{+ }}{{{\hat{\mu }} }_t},[\frac{{\hat{c}}_t}{{\hat{d}}_t} {\bar{R}}]^{-1} {\mathrm{)}}N({x_k}|{{{\bar{x}}}_k},{\bar{P}}_k^{ - 1})}}{{prio({z_k})}}\\&\quad + ({a_0}{\mathrm{+ }}\frac{{k - 1}}{{\mathrm{2}}} - 1)\log \beta + const\\&\quad - \beta \left( {{b_0} + \frac{1}{{\mathrm{2}}}\sum \limits _{j = 1}^{k - 1} {\left[ {{E_{{q_t}(\mu ,\lambda )}}\left[ {\lambda D({z_j} - {H_j}{{{\bar{x}}}_j} - \mu ,\bar{R}_j^*)} \right] } \right] } } \right) . \end{aligned}$$

Then, by re-organizing \(\log q({x_k})q(\beta )\), (30)-(33) are obtained.
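Once the variational mean \({\hat{\mu }}_t\) is fixed, the update for \(q(x_k)\) above reduces to a standard linear-Gaussian (Kalman-type) measurement update with a bias term, which is why no Monte Carlo computation is needed. A minimal sketch in covariance form (note the paper parametrizes with precisions such as \({\bar{P}}_k\); here `P_bar` and `R_eff` denote covariances, and all names and values are illustrative):

```python
import numpy as np

def gaussian_update(x_bar, P_bar, z, H, mu_hat, R_eff):
    """Closed-form Bayes update of the prior N(x | x_bar, P_bar) against
    the likelihood N(z | H x + mu_hat, R_eff); no sampling is required."""
    S = H @ P_bar @ H.T + R_eff            # innovation covariance (cov of prio(z_k))
    K = P_bar @ H.T @ np.linalg.inv(S)     # gain
    x_hat = x_bar + K @ (z - H @ x_bar - mu_hat)
    P_hat = P_bar - K @ H @ P_bar
    return x_hat, P_hat, S

# Toy 2-D state with a scalar biased measurement (all values illustrative).
x_bar = np.array([1.0, 0.0])
P_bar = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, 0.5]])
z = np.array([1.5])
mu_hat = np.array([0.2])
R_eff = np.array([[0.4]])
x_hat, P_hat, S = gaussian_update(x_bar, P_bar, z, H, mu_hat, R_eff)
```

In information form the same update reads \(\hat{P}^{-1} = \bar{P}^{-1} + H^T R_{eff}^{-1} H\), a convenient consistency check on any implementation.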

Appendix B: Proof of Theorem 2

According to mean-field theory, from \(p({x_k},\mu ,\beta ,\lambda , Z_1^k)\) it is straightforward to obtain

$$\begin{aligned}&\log q(\mu ,\lambda ) \\&= {E_{{q_{t + 1}}({x_k}){q_{t + 1}}(\beta )}}\left\{ {\log p({x_k},\mu ,\beta ,\lambda ,Z_1^k)} \right\} + const\\&= \frac{1}{{\mathrm{2}}}\log \lambda - \frac{\lambda }{{\mathrm{2}}}{E_{{q_{t + 1}}({x_k})}}\left[ {D({z_k} - {H_k}{x_k} - \mu ,{\bar{R}})} \right] \\&{} + \sum \limits _{j = 1}^{k - 1} {\left\{ {\frac{1}{{\mathrm{2}}}\log \lambda - \frac{{\lambda {E_{{q_{t + 1}}(\beta )}}[\beta ]}}{{\mathrm{2}}}D({z_j} - {H_j}{{{\bar{x}}}_j} - \mu ,{\bar{R}}_j^*)} \right\} } \\&{}+ \log N{\mathrm{(}}\mu {\mathrm{|}}{\mu _{\mathrm{0}}},{\lambda ^{ - 1}}{ M}_{\mathrm{0}}^{ - 1}{\mathrm{)}} + ({c_0} - 1)\log \lambda - \lambda {d_0}{\mathrm{+ const}} \end{aligned}$$

where

$$\begin{aligned}&{E_{{q_{t + 1}}({x_k})}}\left[ {D({z_k} - {H_k}{x_k} - \mu ,{\bar{R}})} \right] \\&{\mathrm{= }}D({z_k} - {H_k}{\hat{x}}_k^{t + 1} - \mu ,{\bar{R}}){\mathrm{+ Tr}}[{H_k}{(P_k^{t + 1}{\mathrm{)}}^{ - 1}}H_k^T{\bar{R}}]\\&{E_{{q_{t + 1}}(\beta )}}[\beta ]=\frac{{{\bar{a}}_{t+1}}}{{{\bar{b}}_{t+1}}}. \end{aligned}$$

For the following deduction, we define

$$\begin{aligned}&{p({{{\bar{Z}}}_k})}\\&{ = \int {N({{{\bar{Z}}}_k}{\mathrm{|}}(e \otimes {I_m})\mu ,{\lambda ^{ - 1}}{\bar{R}}_k^{ - 1})N(\mu {\mathrm{|}}\mu _0,{\lambda ^{ - 1}}{M}_{\mathrm{0}}^{ - 1})d\mu } }\\&{ = N\left( {{{{\bar{Z}}}_k}{\mathrm{|}}(e \otimes {I_m}){\mu _{\mathrm{0}}},\lambda ^{ - 1} B^{-1}} \right) } \end{aligned}$$

where \(B = {\left( {[e \otimes {I_m}]{M}_{\mathrm{0}}^{ - 1}{{[e \otimes {I_m}]}^T} + {\bar{R}}_k^{ - 1}} \right) ^{ - 1}}\). Obviously, \(p({\bar{Z}_k})\) is independent of the state and can be decomposed as

$$\begin{aligned} {\log p({{{\bar{Z}}}_k})} =&\frac{1}{{\mathrm{2}}}\log \lambda - \frac{\lambda }{{\mathrm{2}}}D\left[ {{{{\bar{Z}}}_k} - (e \otimes {I_m}){\mu _{\mathrm{0}}}, B } \right] \\&+ const \end{aligned}$$

Then, rewriting \({\log q(\mu ,\lambda )}\) with \({\log p({{\bar{Z}}_k})}\), we have

$$\begin{aligned}&{\log q(\mu ,\lambda )}\\&{ = {E_{{q_{t + 1}}({x_k}){q_{t + 1}}(\beta )}}\left\{ {\log p({x_k},\mu ,\beta ,\lambda ,Z_1^k)} \right\} + const}\\&{ = \log \frac{{N({{{\bar{Z}}}_k}{\mathrm{|}}(e \otimes {I_m})\mu ,{\lambda ^{ - 1}}{\bar{R}}_k^{ - 1})N(\mu {\mathrm{|}}{\mu _{\mathrm{0}}},{{(\lambda )}^{ - 1}}{M}_{\mathrm{0}}^{ - 1})}}{{p({{{\bar{Z}}}_k})}}}\\&+ ({c_0}{\mathrm{+ }}\frac{1}{{\mathrm{2}}} - 1)\log \lambda - \lambda \left( {{d_0} + \frac{1}{{\mathrm{2}}}{\mathrm{Tr}}\left[ {{H_k}{{(P_k^{t + 1}{\mathrm{)}}}^{ - 1}}H_k^T{\bar{R}}} \right] } \right) \\&{ - \frac{\lambda }{{\mathrm{2}}}D\left[ {{{{\bar{Z}}}_k} - (e \otimes {I_m}){\mu _{\mathrm{0}}}, B } \right] {\mathrm{+ const}}}. \end{aligned}$$

Then, by re-organizing \({\log q(\mu ,\lambda )}\), Theorem 2 is derived.
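The stacked-measurement marginal \(p({\bar{Z}}_k)\) above only involves the Kronecker factor \(e \otimes I_m\), which is easy to form numerically. Below is a Monte Carlo sanity check of the marginal covariance identity \(\mathrm{cov}({\bar{Z}}_k) = [e \otimes I_m] M_0^{-1} [e \otimes I_m]^T + {\bar{R}}_k^{-1} = B^{-1}\), taking \(\lambda = 1\) for simplicity; dimensions and values are illustrative toy choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 2, 4                                   # measurement dim and epoch count (toy sizes)
E = np.kron(np.ones((k, 1)), np.eye(m))       # e ⊗ I_m: repeats the bias mu over all epochs

M0inv = 0.5 * np.eye(m)                       # covariance of mu (the paper stores its precision M_0)
Rinv = np.diag(rng.uniform(0.2, 1.0, k * m))  # stacked measurement covariance \bar{R}_k^{-1}

# Marginal covariance from Theorem 2 (with lambda = 1): cov(Z) = E M0^{-1} E^T + R^{-1} = B^{-1}.
B = np.linalg.inv(E @ M0inv @ E.T + Rinv)

# Monte Carlo check: draw mu, then stacked measurements around (e ⊗ I_m) mu.
mu = rng.multivariate_normal(np.zeros(m), M0inv, size=200_000)
Z = mu @ E.T + rng.multivariate_normal(np.zeros(k * m), Rinv, size=200_000)
emp_cov = np.cov(Z.T)
```

The empirical covariance of `Z` matches `B`'s inverse up to sampling error, confirming that \(p({\bar{Z}}_k)\) collapses to a single Gaussian in \(\mu_0\).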

Appendix C: Calculation of M-ELBO

Based on (42), the M-ELBO is expressed as (82). The calculations of the expectations in (82) are then given by (83)–(92). For convenience of expression, the subscript for expectation computation is omitted.

$$\begin{aligned}&{L_{M - ELBO}}(q)=\int q_t({x_k},\mu ,\lambda ,\beta )\nonumber \\&\quad \ln \frac{{p({x_k},\mu ,\lambda ,\beta ,Z_1^k)}}{{q_t({x_k},\mu ,\lambda ,\beta )}}d\left\{ {{x_k},\mu ,\lambda ,\beta } \right\} \nonumber \\&\quad = {E_{}}\left[ {p({z_k}|{x_k},\mu ,\lambda )} \right] + {E_{}}\left[ {prio({x_k})} \right] \nonumber \\&\qquad + \sum \limits _{j = 1}^{k - 1} {{E_{}}\left[ {p({z_j}|\mu ,\lambda ,\beta )} \right] } \nonumber \\&\quad {}+ {E_{}}\left[ {p(\mu |\lambda )} \right] + {E_{}}\left[ {p(\lambda )} \right] + {E_{}}\left[ {p(\beta )} \right] - {E_{}}\left[ {q_t({x_k})} \right] \nonumber \\&\qquad - {E_{}}\left[ {q_t(\mu |\lambda )} \right] - {E_{}}\left[ {q_t(\lambda )} \right] - {E_{}}\left[ {q_t(\beta )} \right] \end{aligned}$$
(82)
$$\begin{aligned}&{E_{}}\left[ {p({z_k}|{x_k},\mu ,\lambda )} \right] = - \frac{{{D_m}}}{2}\log 2\pi \nonumber \\&\quad + \frac{{{D_m}}}{2}{\lambda _e} + \frac{1}{2}\log \left| {{R^{ - 1}}} \right| \nonumber \\&\quad - \frac{1}{2}\left( D\left( {{z_k} - {H_k}{\hat{x}}_k^t - {{{\hat{\mu }} }_t} - {u_k},{l_e}{R^{ - 1}}} \right) \right. \nonumber \\&\left. \qquad + Tr\left[ {\left( {{H_k}{{(P_k^t)}^{ - 1}}H_k^T{l_e} + {\hat{M}}_t^{ - 1}} \right) {R^{ - 1}}} \right] \right) \end{aligned}$$
(83)
$$\begin{aligned}&{E_{}}\left[ {prio({x_k})} \right] = - \frac{{{D_n}}}{2}\log 2\pi - \frac{1}{2}\log \left| {{\bar{P}}_j^{ - 1}} \right| \nonumber \\&\qquad - \frac{1}{2}\left( {D\left( {{\hat{x}}_k^t - {{{\bar{x}}}_k},{\bar{P}}_j^{}} \right) + Tr\left[ {{{(P_k^{t + 1})}^{ - 1}}{\bar{P}}_j^{}} \right] } \right) \end{aligned}$$
(84)
$$\begin{aligned}&{E_{}}\left[ {p({z_j}|\mu ,\lambda ,\beta )} \right] = - \frac{{{D_m}}}{2}\log 2\pi + \frac{{{D_m}}}{2}\left( {{\lambda _e} + {\beta _e}} \right) \nonumber \\&\qquad + \frac{1}{2}\log \left| {r{r_j}} \right| - \frac{1}{2}\left( D\left( {{z_j} - {H_j}{\hat{x}}_j^t - {{{\hat{\mu }} }_t} - {u_j},{l_e}{b_e}r{r_j}} \right) \right. \nonumber \\&\qquad \left. + Tr\left[ {{\hat{M}}_t^{ - 1}{b_e}r{r_j}} \right] \right) \end{aligned}$$
(85)
$$\begin{aligned}&{E_{}}\left[ {p(\mu |\lambda )} \right] = - \frac{{{D_m}}}{2}\log 2\pi + \frac{{{D_m}}}{2}{\lambda _e} + \frac{1}{2}\log \left( {\left| {{M_0}} \right| } \right) \nonumber \\&\qquad - \frac{{{l_e}}}{2}D\left( {{{{\hat{\mu }} }_t} - {\mu _0},{M_0}} \right) - \frac{1}{2}Tr\left[ {{\hat{M}}_t^{ - 1}M_0^{ - 1}} \right] \end{aligned}$$
(86)
$$\begin{aligned}&{E_{}}\left[ {p(\lambda )} \right] = - \log \varGamma \left( {{c_0}} \right) + {c_0}\log {d_0} + \left( {{c_0} - 1} \right) {\lambda _e} - {d_0}{l_e} \end{aligned}$$
(87)
$$\begin{aligned}&{E_{}}\left[ {p(\beta )} \right] = - \log \varGamma \left( {{a_0}} \right) + {a_0}\log {b_0} + \left( {{a_0} - 1} \right) {\beta _e} - {b_0}{b_e} \end{aligned}$$
(88)
$$\begin{aligned}&{E_{}}\left[ {q_t({x_k})} \right] = - \frac{{{D_n}}}{2}\log 2\pi - \frac{1}{2}\log \left| {{{(P_k^t)}^{ - 1}}} \right| - \frac{{{D_n}}}{2} \end{aligned}$$
(89)
$$\begin{aligned}&{E_{}}\left[ {q_t(\mu |\lambda )} \right] = - \frac{{{D_m}}}{2}\log 2\pi + \frac{{{D_m}}}{2}{\lambda _e} + \frac{1}{2}\log \left( {\left| {{\hat{M}}_t^{}} \right| } \right) - \frac{{{D_m}}}{2} \end{aligned}$$
(90)
$$\begin{aligned}&{E_{}}\left[ {q_t(\lambda )} \right] = - \log \varGamma \left( {{{{\hat{c}}}_t}} \right) + {{{\hat{c}}}_t}\log {{{\hat{d}}}_t} + \left( {{{{\hat{c}}}_t} - 1} \right) {\lambda _e} - {{{\hat{d}}}_t}{l_e} \end{aligned}$$
(91)
$$\begin{aligned}&{E_{}}\left[ {q_t(\beta )} \right] = - \log \varGamma \left( {{{{\hat{a}}}_t}} \right) + {{{\hat{a}}}_t}\log {{{\hat{b}}}_t} + \left( {{{{\hat{a}}}_t} - 1} \right) {\beta _e} - {{{\hat{b}}}_t}{b_e} \end{aligned}$$
(92)
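Each of the terms (83)–(92) is a standard Gaussian or Gamma expectation. For instance, (89) is the negative entropy of the Gaussian \(q_t(x_k)\) (recall that in the paper's precision parametrization the covariance is \((P_k^t)^{-1}\), hence the \(-\tfrac{1}{2}\log |(P_k^t)^{-1}|\) term). A quick Monte Carlo check of that closed form, with illustrative toy values:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
Dn = 3
mean = np.array([0.5, -1.0, 2.0])
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.1, 0.5]])   # plays the role of the covariance (P_k^t)^{-1}

# Closed form of E_q[log q(x)] for q = N(mean, cov): the negative Gaussian entropy,
# -(Dn/2) log 2*pi - (1/2) log|cov| - Dn/2, matching the structure of (89).
_, logdet = np.linalg.slogdet(cov)
closed = -0.5 * Dn * np.log(2 * np.pi) - 0.5 * logdet - 0.5 * Dn

# Monte Carlo estimate of the same expectation.
x = rng.multivariate_normal(mean, cov, size=400_000)
mc = multivariate_normal(mean, cov).logpdf(x).mean()
```

The sampled average of \(\log q\) agrees with the closed form to within Monte Carlo error, and the other expectations in (83)–(92) can be checked the same way.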

About this article

Cite this article

Wang, X., Cui, H., Li, T. et al. A novel nonlinear filter through constructing the parametric Gaussian regression process. Nonlinear Dyn 105, 579–602 (2021). https://doi.org/10.1007/s11071-021-06626-6
