
Artificial Neural Networks: The Missing Link Between Curiosity and Accuracy

  • Giorgia Franchini
  • Paolo Burgio
  • Luca Zanni
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 941)

Abstract

Artificial Neural Networks, as the name suggests, are biologically inspired algorithms designed to simulate the way in which the human brain processes information. Like biological neurons, which consist of a cell nucleus receiving input from other neurons through a web of input terminals, an Artificial Neural Network comprises hundreds of simple units, artificial neurons or processing elements, connected by coefficients (weights) and organized in layers. The power of neural computation comes from connecting neurons in a network: an Artificial Neural Network can process many pieces of information at the same time. What is not fully understood is the most efficient way to train an Artificial Neural Network, and in particular which mini-batch size maximizes accuracy while minimizing training time. The idea developed in this study has its roots in the biological world that inspired Artificial Neural Networks in the first place.
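To make these ingredients concrete, the following is a minimal illustrative sketch (our own, not the authors' code) of a single-hidden-layer network trained with mini-batch stochastic gradient descent in plain NumPy; the architecture, toy data, and hyperparameters are assumptions chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 1000 samples with 20 features and binary labels.
    X = rng.standard_normal((1000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

    # Weights (the "coefficients" connecting the layers of units).
    W1 = rng.standard_normal((20, 16)) * 0.1
    W2 = rng.standard_normal((16, 1)) * 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr, batch_size = 0.1, 32
    for epoch in range(10):
        perm = rng.permutation(len(X))
        for i in range(0, len(X), batch_size):
            idx = perm[i:i + batch_size]
            xb, yb = X[idx], y[idx]
            h = np.tanh(xb @ W1)           # hidden layer
            p = sigmoid(h @ W2)            # output layer
            # Backward pass: gradients of the cross-entropy loss.
            dp = (p - yb) / len(xb)
            dW2 = h.T @ dp
            dh = (dp @ W2.T) * (1 - h**2)
            dW1 = xb.T @ dh
            # One stochastic gradient step per mini-batch.
            W1 -= lr * dW1
            W2 -= lr * dW2

Each iteration estimates the gradient from a mini-batch of 32 samples; the batch size fixed here is exactly the quantity this study proposes to vary during training.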

Humans have altered the face of the world through extraordinary adaptive and technological advances: these changes were made possible by our cognitive structure, particularly the ability to reason and to build causal models of external events. This dynamism is driven by a high degree of curiosity. In the biological world, and especially in human beings, curiosity arises from the constant search for knowledge and information: the behaviours that support this information-sampling mechanism range from the very small and simple (an initial small mini-batch size, in our analogy) to the very elaborate and sustained (a progressively increasing mini-batch size).

The goal of this project is to train an Artificial Neural Network by increasing the mini-batch size dynamically, in an adaptive manner driven by a validation set; our hypothesis is that this training method will be more efficient, in terms of time and cost, than the methods implemented so far.
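As a rough, self-contained illustration of such a validation-driven schedule, the sketch below trains a simple logistic model with mini-batch stochastic gradient descent and doubles the batch size whenever the validation loss stops improving; the doubling rule, the patience of two epochs, and all hyperparameters are our own assumptions, not the criterion proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data split into training and validation sets.
    X = rng.standard_normal((1200, 20))
    y = (X[:, 0] - X[:, 2] > 0).astype(float)
    X_tr, y_tr, X_val, y_val = X[:1000], y[:1000], X[1000:], y[1000:]

    w = np.zeros(20)  # logistic-regression weights, for simplicity

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def val_loss():
        p = sigmoid(X_val @ w)
        return -np.mean(y_val * np.log(p + 1e-12)
                        + (1 - y_val) * np.log(1 - p + 1e-12))

    lr, batch_size, best, stall = 0.5, 8, np.inf, 0
    for epoch in range(50):
        perm = rng.permutation(len(X_tr))
        for i in range(0, len(X_tr), batch_size):
            idx = perm[i:i + batch_size]
            p = sigmoid(X_tr[idx] @ w)
            w -= lr * X_tr[idx].T @ (p - y_tr[idx]) / len(idx)
        loss = val_loss()
        if loss < best - 1e-4:
            best, stall = loss, 0   # validation still improving: keep the size
        else:
            stall += 1
        if stall >= 2 and batch_size < 512:
            batch_size *= 2         # plateau: widen the "information sample"
            stall = 0

Starting small keeps early gradient steps cheap and exploratory, while growing the batch later yields more accurate gradient estimates as training converges.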

Keywords

Artificial Neural Network · Stochastic gradient · Mini-batch size increasing

Notes

Acknowledgements

The research leading to these results has received funding from the European Union’s Horizon 2020 Programme under the CLASS Project (https://class-project.eu/), grant agreement no. 780622.

This work was partially supported also by INdAM-GNCS (Research Projects 2018).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Modena and Reggio Emilia, Modena, Italy
