Informative Ensemble Kalman Learning for Neural Structure

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12312)

Abstract

We characterize the stochastic dynamics of neural learning in order to develop Informative Ensemble Kalman Learning. Specifically, an adaptive Ensemble Kalman Filter replaces backpropagation, quantifying uncertainty and maximizing information gain during learning. After demonstrating competitive performance on standard datasets, we show that structure learning with the Informative Ensemble Kalman Learner quickly recovers the dynamical equations of the Lorenz-63 system ab initio from data. These results indicate that extending the key informative paradigm of DDDAS to the optimization of learning systems is promising.
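The training scheme the abstract describes, an ensemble Kalman update applied to network weights in place of backpropagation, can be sketched in a few lines. The following is a minimal illustration only, with an assumed tiny 1-10-1 tanh network, a toy regression target, and perturbed observations; it is not the authors' implementation (see the code links in the Notes for that):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x), to be fit without backpropagation.
x = np.linspace(-3, 3, 40)
y = np.sin(x)

def forward(w, x):
    """Tiny assumed 1-10-1 tanh network; w is a flat parameter vector."""
    W1, b1 = w[:10].reshape(10, 1), w[10:20]
    W2, b2 = w[20:30].reshape(1, 10), w[30]
    h = np.tanh(W1 @ x[None, :] + b1[:, None])
    return (W2 @ h + b2).ravel()

n_params, n_ens, r = 31, 200, 1e-2
W = rng.normal(0.0, 0.5, size=(n_ens, n_params))   # ensemble of weight vectors

for step in range(200):
    H = np.stack([forward(w, x) for w in W])        # ensemble predictions
    A = W - W.mean(0)                               # weight anomalies
    B = H - H.mean(0)                               # prediction anomalies
    C_wh = A.T @ B / (n_ens - 1)                    # weight-output cross-covariance
    C_hh = B.T @ B / (n_ens - 1) + r * np.eye(len(x))  # output covariance + obs noise
    K = C_wh @ np.linalg.solve(C_hh, np.eye(len(x)))   # Kalman gain
    obs = y[None, :] + rng.normal(0, np.sqrt(r), size=H.shape)  # perturbed observations
    W = W + (obs - H) @ K.T                         # analysis update of every member

rmse = np.sqrt(np.mean((forward(W.mean(0), x) - y) ** 2))
```

Each iteration replaces a gradient step with a statistical linear-regression update from weight space to output space; perturbed observations keep the ensemble spread consistent with the assumed observation noise `r`.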

Keywords

  • Deep learning
  • Ensemble Kalman Filter
  • Informative learning
  • Systems dynamics and optimization

Support from ONR grant N00014-19-1-2273 and VTSIX INC is gratefully acknowledged.


Notes

  1. See course at http://essg.mit.edu/ml.

  2. Perturbed observations are used here for simplicity; this is not strictly necessary.

  3. Obtain code from https://github.com/sairavela/EnsembleTrain.git.

  4. Code may be found at http://github.com/sairavela/LorenzStructureLearn.
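The structure-learning target referenced in note 4, the Lorenz-63 system, is generated from three coupled ODEs. The sketch below integrates them with a plain RK4 stepper at the standard parameters (sigma = 10, rho = 28, beta = 8/3) to produce the kind of trajectory data such an experiment starts from; the step size and trajectory length are illustrative choices, not the paper's settings:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

# Classical fourth-order Runge-Kutta integration of one trajectory.
dt, n_steps = 0.01, 5000
traj = np.empty((n_steps, 3))
traj[0] = [1.0, 1.0, 1.0]
for k in range(n_steps - 1):
    s = traj[k]
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    traj[k + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Recovering the right-hand-side equations "ab initio" then amounts to learning `lorenz63` itself from samples of `traj`.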


Author information

Correspondence to Sai Ravela.


Copyright information

© 2020 Springer Nature Switzerland AG


Cite this paper

Trautner, M., Margolis, G., Ravela, S. (2020). Informative Ensemble Kalman Learning for Neural Structure. In: Darema, F., Blasch, E., Ravela, S., Aved, A. (eds.) Dynamic Data Driven Applications Systems. DDDAS 2020. Lecture Notes in Computer Science, vol. 12312. Springer, Cham. https://doi.org/10.1007/978-3-030-61725-7_23

  • DOI: https://doi.org/10.1007/978-3-030-61725-7_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61724-0

  • Online ISBN: 978-3-030-61725-7

  • eBook Packages: Computer Science, Computer Science (R0)