Verification and Repair of Neural Networks: A Progress Report on Convolutional Models

  • Dario Guidotti
  • Francesco Leofante
  • Luca Pulina
  • Armando Tacchella
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11946)

Abstract

Recent public calls for the development of explainable and verifiable AI have led to a growing interest in the formal verification and repair of machine-learned models. Despite the impressive progress made by the learning community, models such as deep neural networks remain vulnerable to adversarial attacks, and their sheer size is a major obstacle to formal analysis. In this paper we present our current efforts to tackle the repair of deep convolutional neural networks using ideas borrowed from Transfer Learning. With results obtained on the popular MNIST and CIFAR10 datasets, we show that deep convolutional neural networks can be transformed into simpler models while preserving their accuracy, and we discuss how formal repair through convex programming techniques could benefit from this process.
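The paper's own method is not reproduced on this page, but the idea sketched in the abstract, namely keeping learned features fixed and retraining or repairing a simpler output layer by solving a convex problem, can be illustrated with a minimal NumPy sketch. Everything below is hypothetical: the random projection standing in for frozen convolutional features, the ridge-regression head, and all dimensions are illustrative assumptions, not the authors' actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen feature extractor: a random projection plus ReLU
# stands in for the convolutional layers transferred from a larger model.
def phi(x, proj):
    return np.maximum(x @ proj, 0.0)

d_in, d_feat, n_classes, n = 64, 32, 10, 200
proj = rng.normal(size=(d_in, d_feat))        # frozen "pretrained" weights
X = rng.normal(size=(n, d_in))
y = rng.integers(0, n_classes, size=n)
Y = np.eye(n_classes)[y]                      # one-hot targets

F = phi(X, proj)                              # fixed features for all inputs

# With the features fixed, fitting the linear head is a convex
# (ridge-regression) problem with a closed-form solution.
lam = 1e-2
W = np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ Y)

# "Repair" in this simplified setting: when some inputs are found to be
# misclassified, add them with corrected targets and re-solve the same
# convex problem, leaving the frozen features untouched.
X_fix = rng.normal(size=(20, d_in))
Y_fix = np.eye(n_classes)[rng.integers(0, n_classes, size=20)]
F_all = np.vstack([F, phi(X_fix, proj)])
Y_all = np.vstack([Y, Y_fix])
W_rep = np.linalg.solve(F_all.T @ F_all + lam * np.eye(d_feat),
                        F_all.T @ Y_all)
```

Because only the head is re-optimized and the objective is convex, the repaired weights are a global optimum of the stated problem; this is the kind of tractability that motivates transforming a deep network into a simpler model before attempting formal repair.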

Keywords

Transfer Learning · Network repair · Convex optimization

Notes

Acknowledgments

The research of Francesco Leofante and Luca Pulina has been funded by the Sardinian Regional Project PRO-COMFORT (POR FESR Sardegna 2014-2020 - Asse 1, Azione 1.1.3). The research of Luca Pulina has also been partially funded by the Sardinian Regional Projects PROSSIMO (POR FESR Sardegna 2014/20-ASSE I) and SMART_UzER (POR FESR Sardegna 2014-2020, Asse I, Azione 1.2.2), and by the University of Sassari (research fund "Metodi per la verifica di reti neurali").

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Dario Guidotti (1)
  • Francesco Leofante (1, 2)
  • Luca Pulina (3)
  • Armando Tacchella (1), email author
  1. University of Genoa, Genoa, Italy
  2. RWTH Aachen University, Aachen, Germany
  3. University of Sassari, Sassari, Italy