
Deep Markov Models for Data Assimilation in Chaotic Dynamical Systems

  • Calvin Janitra Halim
  • Kazuhiko Kawamoto
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1128)

Abstract

This is an extended version of a selected paper from JSAI2019. Recently, the use of deep learning in data assimilation has been gaining research attention. For instance, the time-series deep Markov model has been proposed along with an inference network trained using variational inference. However, the original proposal did not fully leverage the model's ability for data assimilation. We therefore evaluate the suitability of a deep Markov model and its inference network for a chaotic dynamical system, a common problem setting in data assimilation. We evaluate the model under various generative conditions. The results show that when part of the target model is known, the deep Markov model performs comparably to a smoothed unscented Kalman filter, even in the presence of process and observation noise.
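
The deep Markov model referred to above couples a neural transition density p(z_t | z_{t-1}) and an emission density p(x_t | z_t) with an RNN-based inference network q(z_t | z_{t-1}, x_{t:T}), trained by maximizing the evidence lower bound. The following is a minimal sketch of that setup in PyTorch; it is not the architecture or the hyperparameters used in the paper, and the layer sizes, the three-dimensional observation, and the training loop are illustrative assumptions.

```python
# A minimal sketch, assuming PyTorch, of a deep Markov model: a neural transition
# p(z_t | z_{t-1}), an emission p(x_t | z_t), and a GRU-based inference network
# q(z_t | z_{t-1}, x_{t:T}) trained by maximizing the ELBO (cf. Krishnan et al., 2017).
# Dimensions and layer sizes are illustrative, not those used in the paper.
import math
import torch
import torch.nn as nn

class DeepMarkovModel(nn.Module):
    def __init__(self, x_dim=3, z_dim=8, h_dim=32):
        super().__init__()
        # Transition network: z_{t-1} -> (mean, log-variance) of z_t
        self.trans = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                   nn.Linear(h_dim, 2 * z_dim))
        # Emission network: z_t -> (mean, log-variance) of x_t
        self.emit = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                  nn.Linear(h_dim, 2 * x_dim))
        # Inference network: a backward GRU summarizes x_{t:T}; combined with
        # z_{t-1} it parameterizes the Gaussian posterior q(z_t | z_{t-1}, x_{t:T})
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.comb = nn.Linear(z_dim + h_dim, 2 * z_dim)
        self.z0 = nn.Parameter(torch.zeros(z_dim))

    @staticmethod
    def _split(params):
        return params.chunk(2, dim=-1)  # (mean, log-variance)

    def forward(self, x):
        # x: (batch, T, x_dim); returns the negative ELBO averaged over the batch
        b, T, _ = x.shape
        # Run the GRU over the reversed sequence so h[:, t] summarizes x_{t:T}
        h, _ = self.rnn(torch.flip(x, dims=[1]))
        h = torch.flip(h, dims=[1])
        z_prev = self.z0.expand(b, -1)
        neg_elbo = 0.0
        for t in range(T):
            # Posterior q(z_t | z_{t-1}, x_{t:T}) and reparameterized sample
            q_mu, q_lv = self._split(self.comb(torch.cat([z_prev, h[:, t]], dim=-1)))
            z_t = q_mu + torch.randn_like(q_mu) * (0.5 * q_lv).exp()
            # Prior p(z_t | z_{t-1}) from the transition network
            p_mu, p_lv = self._split(self.trans(z_prev))
            # KL(q || p) between diagonal Gaussians
            kl = 0.5 * (p_lv - q_lv - 1
                        + (q_lv.exp() + (q_mu - p_mu) ** 2) / p_lv.exp()).sum(-1)
            # Reconstruction term: Gaussian log p(x_t | z_t)
            e_mu, e_lv = self._split(self.emit(z_t))
            log_px = -0.5 * (e_lv + (x[:, t] - e_mu) ** 2 / e_lv.exp()
                             + math.log(2 * math.pi)).sum(-1)
            neg_elbo = neg_elbo + kl - log_px
            z_prev = z_t
        return neg_elbo.mean()

# Usage sketch: one optimization step on placeholder trajectories
model = DeepMarkovModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 50, 3)  # stand-in for noisy observations of a chaotic system
loss = model(x)
opt.zero_grad()
loss.backward()
opt.step()
```

In a data-assimilation setting, the trained inference network can then be run over a new observation sequence to produce smoothed latent-state estimates, playing the role that the smoothed unscented Kalman filter plays in the comparison above.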

Keywords

Data assimilation · Chaotic system · Deep Markov model · Variational inference

Notes

Acknowledgements

This work was supported by JSPS KAKENHI under Grant Number JP16K00231.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Chiba University, Chiba-shi, Japan
