
Mise en abyme with Artificial Intelligence: How to Predict the Accuracy of NN, Applied to Hyper-parameter Tuning

  • Giorgia Franchini
  • Mathilde Galinier (corresponding author)
  • Micaela Verucchi
Conference paper
Part of the Proceedings of the International Neural Networks Society book series (INNS, volume 1)

Abstract

In deep learning, the costliest phase from a computational standpoint is the full training of the learning algorithm. However, this process must be repeated a significant number of times during the design of a new artificial neural network, making the overall procedure extremely expensive. Here, we propose a low-cost strategy to predict the accuracy of the algorithm based only on its initial behaviour. To do so, we train the network of interest up to convergence several times, modifying its characteristics at each training, and store the initial and final accuracies observed during this preliminary process in a database. We then use both curve-fitting and Support Vector Machines techniques, the latter trained on the created database, to predict the accuracy of the network given its accuracy on the first iterations of its learning. This approach is of particular interest when the space of the characteristics of the network is notably large or when its full training is highly time-consuming. The results we obtained are promising and encouraged us to apply this strategy to a topical issue: hyper-parameter optimisation (HO). In particular, we focused on the HO of a convolutional neural network for the classification of the MNIST and CIFAR-10 datasets. Using our prediction method, together with an algorithm we implemented for a probabilistic exploration of the hyper-parameter space, we were able to recover, at a relatively low cost, the hyper-parameter settings corresponding to the optimal accuracies already known in the literature.
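
To make the pipeline concrete, here is a minimal sketch of the two prediction routes described above. It is our illustration, not the authors' implementation: it assumes a saturating exponential model for the accuracy curve (the paper does not specify its fitting function), uses scikit-learn's SVR as the Support Vector Machine component, and all function names, the feature layout (one row of early-epoch accuracies per preliminary training run) and numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.svm import SVR


def learning_curve(t, a, b, c):
    # Assumed saturating accuracy model acc(t) = a - b * exp(-c * t);
    # the plateau parameter 'a' is the accuracy at convergence.
    return a - b * np.exp(-c * t)


def predict_final_accuracy_fit(epochs, early_accs):
    # Curve-fitting route: extrapolate the final accuracy from the
    # accuracies observed on the first iterations of training.
    params, _ = curve_fit(learning_curve, epochs, early_accs,
                          p0=(early_accs[-1], 0.5, 0.1), maxfev=5000)
    return params[0]  # plateau value = predicted final accuracy


def train_svm_predictor(early_acc_db, final_acc_db):
    # SVM route: regress the final accuracy on the early-epoch
    # accuracies stored in the database of preliminary full trainings.
    svr = SVR(kernel="rbf", C=10.0)
    svr.fit(early_acc_db, final_acc_db)
    return svr


# Illustrative numbers only.
epochs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
accs = np.array([0.42, 0.55, 0.63, 0.68, 0.71])
print(predict_final_accuracy_fit(epochs, accs))

db_X = np.array([[0.40, 0.52, 0.60],   # early accuracies, one run per row
                 [0.30, 0.41, 0.50],
                 [0.45, 0.58, 0.66]])
db_y = np.array([0.78, 0.65, 0.83])    # final accuracies of the same runs
svr = train_svm_predictor(db_X, db_y)
print(svr.predict([[0.42, 0.55, 0.63]]))
```

In a hyper-parameter search, such a predictor lets each candidate configuration be scored after only a few epochs, so the full training budget is spent only on the most promising settings.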

Keywords

Machine learning · Support Vector Machines · Curve fitting · Artificial neural network · Hyper-parameter optimisation

Acknowledgements

The research leading to these results has received funding from the European Union’s Horizon 2020 Programme under the CLASS Project (https://class-project.eu/), grant agreement no. 780622.

This work was also partially supported by INdAM-GNCS (Research Projects 2018). Furthermore, it was partially supported by the INdAM Doctoral Programme in Mathematics and/or Applications Cofunded by Marie Sklodowska-Curie Actions (INdAM-DP-COFUND-2015), grant number 713485.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Giorgia Franchini¹
  • Mathilde Galinier¹,² (corresponding author)
  • Micaela Verucchi¹

  1. Università degli studi di Modena e Reggio Emilia, Modena, Italy
  2. Marie Sklodowska-Curie fellow of the Istituto Nazionale di Alta Matematica, Rome, Italy
