
Regularization theory in the study of generalization ability of a biological neural network model

  • Aleksandra Świetlicka
Open Access Article

Abstract

This paper focuses on the generalization ability of a dendritic neuron model (a model of a simple neural network). The model is an extension of the Hodgkin-Huxley model: Markov kinetic schemes are used in its mathematical description, and the Lagrange multipliers method is applied to train it. Generalization ability is studied with a method known from regularization theory, in which a regularizer is added to the neural network error function. Three regularizers are considered: the sum of squared weights of the model (the penalty function), a linear differential operator related to the input-output mapping (the Tikhonov functional), and the square norm of the network curvature. The influence of the regularizers on the training process and its results is illustrated with the problem of noise reduction in images of electronic components, and several metrics are used to compare the results obtained with the different regularizers.
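For reference, the regularizers listed above correspond to the standard forms used in regularization theory. The following is a generic sketch only; the symbols (training error $E_s$, regularization parameter $\lambda$, weight vector $\mathbf{w}$, input-output mapping $F(\mathbf{x},\mathbf{w})$, and linear differential operator $D$) are illustrative and may differ in detail from the paper's own formulation, in particular from how the Lagrange-multiplier training incorporates the regularizer. The regularized error function takes the form

$$E(\mathbf{w}) = E_s(\mathbf{w}) + \lambda \, E_c(\mathbf{w}),$$

with the complexity term $E_c$ chosen, for example, as

$$E_c(\mathbf{w}) = \|\mathbf{w}\|^2 = \sum_i w_i^2 \quad \text{(weight penalty)},$$
$$E_c(\mathbf{w}) = \tfrac{1}{2} \int \| D F(\mathbf{x},\mathbf{w}) \|^2 \, d\mathbf{x} \quad \text{(Tikhonov functional)},$$
$$E_c(\mathbf{w}) = \tfrac{1}{2} \int \left\| \frac{\partial^2 F(\mathbf{x},\mathbf{w})}{\partial \mathbf{x}^2} \right\|^2 d\mathbf{x} \quad \text{(curvature term)}.$$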

Keywords

Kinetic model of neuron · Markov kinetic schemes · Lagrange multipliers · Generalization ability · Image processing · Noise reduction

Mathematics Subject Classification (2010)

68T05 

Notes

Acknowledgements

I would like to offer my special thanks to Krzysztof Kolanowski for taking Figs. 2 and 3(A1), and to Agata Jurkowlaniec for preparing the images so that they could be used to train the biological neural network model.


Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Institute of Automation and Robotics, Poznan University of Technology, Poznań, Poland
