
On Capacity with Incremental Learning by Simplified Chaotic Neural Network

  • Toshinori Deguchi
  • Naohiro Ishii
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11324)

Abstract

Chaotic behaviors are often observed in biological brains and are strongly related to memory storage and learning in chaotic neural networks. Incremental learning is a method for composing an associative memory using a chaotic neural network; it provides a larger capacity than the Hebbian rule at the cost of a larger amount of computation. In previous works, patterns were generated randomly so that half of the elements were +1 and the other half were -1. With finely tuned parameters, the network learned these patterns well, but this result could be attributed to over-learning of the patterns' structure. We therefore proposed pattern-generation methods that avoid over-learning and tested patterns in which the ratio of +1 to -1 elements differs from 1:1. In this paper, our simulations investigate the capacity of the usual chaotic neural network and of the simplified chaotic neural network with these patterns to confirm that no over-learning occurs.
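The test patterns described above can be illustrated with a minimal sketch. This is an assumption-laden illustration only: NumPy, the function name generate_pattern, and the parameter plus_ratio are our own choices and do not appear in the paper. It draws a random pattern of +1/-1 elements in which plus_ratio controls the fraction of +1s, so 0.5 reproduces the earlier half-and-half patterns and other values give the unbalanced patterns investigated here.

    import numpy as np

    def generate_pattern(n_neurons, plus_ratio=0.5, rng=None):
        # Random pattern of +1/-1 elements; plus_ratio sets the fraction of +1s.
        # (Hypothetical helper for illustration; not from the paper.)
        rng = np.random.default_rng() if rng is None else rng
        n_plus = round(n_neurons * plus_ratio)
        pattern = np.full(n_neurons, -1, dtype=int)
        plus_idx = rng.choice(n_neurons, size=n_plus, replace=False)
        pattern[plus_idx] = 1
        return pattern

    # Half +1 / half -1 patterns, as in the earlier experiments:
    balanced = generate_pattern(100, plus_ratio=0.5)
    # Unbalanced pattern with a different ratio of +1 to -1, as studied here:
    unbalanced = generate_pattern(100, plus_ratio=0.3)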

Keywords

Chaotic neural network · Capacity of network · Incremental learning


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. National Institute of Technology, Gifu College, Gifu, Japan
  2. Aichi Institute of Technology, Aichi, Japan
