Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter

Original Article
Formal Aspects of Computing

Abstract

Verification has emerged as a means to provide formal guarantees about learning-based systems that incorporate neural networks before they are deployed in safety-critical applications. This paper proposes a new verification approach for deep neural networks (DNNs) with piecewise linear activation functions using reachability analysis. The core of our approach is a collection of reachability algorithms based on star sets (or stars, for short), an effective symbolic representation of high-dimensional polytopes. The star-based reachability algorithms compute the output reachable set of a network for a given input set, and the result is then used for verification. For a neural network with piecewise linear activation functions, our approach can construct both exact and over-approximate reachable sets of the network. To enhance scalability, each star set is equipped with an outer zonotope (a zonotope over-approximation of the star set) that quickly estimates the lower and upper bounds of the input to a specific neuron, which determines whether splitting occurs at that neuron. This zonotope pre-filtering step significantly reduces the number of linear programming (LP) problems that must be solved during the analysis, which reduces computation time and enhances the scalability of the star set approach. Our reachability algorithms are implemented in a software prototype called the neural network verification (NNV) tool and can be applied to problems in analyzing the robustness of machine learning methods, such as safety and robustness verification of DNNs. Our experiments show that our approach achieves runtimes 20 to 1400 times faster than Reluplex, a satisfiability modulo theories (SMT)-based approach. Our star set approach is also less conservative than other recent zonotope and abstract-domain approaches.
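To make the pre-filtering idea described above concrete, the following is a minimal Python sketch of the per-neuron check, not the authors' NNV implementation. It assumes a standard parameterization of a star set (x = c + V·alpha subject to C·alpha <= d, with a bounded predicate polytope) and of its outer zonotope (x = zc + Z·beta with each beta component in [-1, 1]); all function and variable names are illustrative.

```python
# Minimal sketch of the zonotope pre-filter for one ReLU neuron (illustrative only).
# Star set:       x = c + V @ alpha,  subject to  C @ alpha <= d
# Outer zonotope: x = zc + Z @ beta,  with each beta_i in [-1, 1]
import numpy as np
from scipy.optimize import linprog


def zonotope_bounds(w, b, zc, Z):
    """Cheap bounds on the pre-activation w.x + b over the outer zonotope."""
    center = float(w @ zc) + b
    radius = float(np.abs(w @ Z).sum())   # L1 norm of the mapped generators
    return center - radius, center + radius


def star_bounds(w, b, c, V, C, d):
    """Exact bounds on w.x + b over the star via two LPs in the alpha variables."""
    obj = w @ V
    lo = linprog(obj, A_ub=C, b_ub=d, bounds=(None, None)).fun + float(w @ c) + b
    hi = -linprog(-obj, A_ub=C, b_ub=d, bounds=(None, None)).fun + float(w @ c) + b
    return lo, hi


def relu_neuron_status(w, b, star, zono):
    """Classify one ReLU neuron as 'active', 'inactive', or 'split'.

    The cheap zonotope bounds are tried first; the two LPs over the star are
    solved only when those bounds straddle zero.
    """
    zl, zu = zonotope_bounds(w, b, *zono)
    if zl >= 0:
        return "active"      # provably nonnegative: no split, no LPs needed
    if zu <= 0:
        return "inactive"    # provably nonpositive: no split, no LPs needed
    sl, su = star_bounds(w, b, *star)     # fall back to exact (LP) bounds
    if sl >= 0:
        return "active"
    if su <= 0:
        return "inactive"
    return "split"           # exact range contains zero: the ReLU splits the star
```

A neuron reported as 'split' is one for which the exact analysis would branch the star into two child sets, one per linear region of the ReLU, while every neuron resolved by the zonotope bounds alone skips two LP solves; this is the source of the reduction in computation time described in the abstract.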


References

  1. Akintunde ME, Kevorchian A, Lomuscio A, Pirovano E (2019) Verification of RNN-based neural agent-environment systems. In: Proceedings of the 33rd AAAI conference on artificial intelligence (AAAI-19). Honolulu, HI, USA. AAAI Press (to appear)

  2. Akintunde M, Lomuscio A, Maganti L, Pirovano E (2018) Reachability analysis for neural agent-environment systems. In: Sixteenth international conference on principles of knowledge representation and reasoning

  3. Bak S, Duggirala PS (2017) Simulation-equivalent reachability of large linear systems with inputs. In: International conference on computer aided verification. Springer, pp 401–420

  4. Bojarski M, Del Testa D, Dworakowski D, Firner B, Flepp B, Goyal P, Jackel LD, Monfort M, Muller U, Zhang J et al (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316

  5. Bastani O, Ioannou Y, Lampropoulos L, Vytiniotis D, Nori A, Criminisi A (2016) Measuring neural net robustness with constraints. In: Advances in neural information processing systems, pp 2613–2621

  6. Bak S, Tran H-D, Hobbs K, Johnson T (2020) Improved geometric path enumeration for verifying ReLU neural networks. In: Proceedings of the 32nd international conference on computer aided verification. Springer

  7. Dutta S, Jha S, Sankaranarayanan S, Tiwari A (2017) Output range analysis for deep neural networks. arXiv preprint arXiv:1709.09130

  8. Ehlers R (2017) Formal verification of piece-wise linear feed-forward neural networks. In: International symposium on automated technology for verification and analysis. Springer, pp 269–286

  9. Gehr T, Mirman M, Drachsler-Cohen D, Tsankov P, Chaudhuri S, Vechev M (2018) AI2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE symposium on security and privacy (SP)

  10. Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572

  11. Hinton G, Deng L, Yu D, Dahl GE, Mohamed A, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN et al (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82–97

  12. Heilweil R (2020) Tesla needs to fix its deadly Autopilot problem

  13. Huang X, Kwiatkowska M, Wang S, Wu M (2017) Safety verification of deep neural networks. In: International conference on computer aided verification. Springer, pp 3–29

  14. Julian KD, Kochenderfer MJ, Owen MP (2018) Deep neural network compression for aircraft collision avoidance systems. arXiv preprint arXiv:1810.04240

  15. Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ (2017) Reluplex: an efficient SMT solver for verifying deep neural networks. In: International conference on computer aided verification. Springer, pp 97–117

  16. Kvasnica M, Grieder P, Baotić M, Morari M (2004) Multi-parametric toolbox (MPT). In: International workshop on hybrid systems: computation and control. Springer, pp 448–462

  17. Katz G, Huang DA, Ibeling D, Julian K, Lazarus C, Lim R, Shah P, Thakoor S, Wu H, Zeljić A et al (2019) The Marabou framework for verification and analysis of deep neural networks. In: International conference on computer aided verification. Springer, pp 443–452

  18. Kouvaros P, Lomuscio A (2018) Formal verification of CNN-based perception systems. arXiv preprint arXiv:1811.11373

  19. LeCun Y (1998) The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/

  20. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60–88

  21. Lomuscio A, Maganti L (2017) An approach to reachability analysis for feed-forward ReLU neural networks. arXiv preprint arXiv:1706.07351

  22. Liu W, Wang Z, Liu X, Zeng N, Liu Y, Alsaadi FE (2017) A survey of deep neural network architectures and their applications. Neurocomputing 234:11–26

  23. Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2574–2582

  24. Muoio D (2017) The self-driving Uber in the Arizona crash was hit crossing an intersection on yellow

  25. Pulina L, Tacchella A (2010) An abstraction-refinement approach to verification of artificial neural networks. In: International conference on computer aided verification. Springer, pp 243–257

  26. Singh G, Gehr T, Mirman M, Püschel M, Vechev M (2018) Fast and effective robustness certification. In: Advances in neural information processing systems, pp 10825–10836

  27. Singh G, Gehr T, Püschel M, Vechev M (2019) An abstract domain for certifying neural networks. Proc ACM Programm Lang 3(POPL):41

  28. Tran H-D, Bak S, Xiang W, Johnson TT (2020) Verification of deep convolutional neural networks using imagestars. In: 32nd international conference on computer-aided verification (CAV). Springer

  29. Tran H-D, Musau P, Lopez DM, Yang X, Nguyen LV, Xiang W, Johnson TT (2019) Parallelizable reachability analysis algorithms for feed-forward neural networks. In: 7th international conference on formal methods in software engineering (FormaliSE 2019), Montreal, Canada

  30. Tran H-D, Musau P, Lopez DM, Yang X, Nguyen LV, Xiang W, Johnson TT (2019) Star-based reachability analysis for deep neural networks. In: 23rd international symposium on formal methods (FM'19). Springer

  31. Tran H-D, Pal N, Musau P, Yang X, Hamilton NP, Lopez DM, Bak S, Johnson TT (2021) Robustness verification of semantic segmentation neural networks using relaxed reachability. In: Proceedings of the 33rd international conference on computer-aided verification. Springer

  32. Tran H-D, Yang X, Lopez DM, Musau P, Nguyen LV, Xiang W, Bak S, Johnson TT (2020) NNV: the neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: 32nd international conference on computer-aided verification (CAV)

  33. Wang S, Pei K, Whitehouse J, Yang J, Jana S (2018) Efficient formal safety analysis of neural networks. In: Advances in neural information processing systems, pp 6369–6379

  34. Wang S, Pei K, Whitehouse J, Yang J, Jana S (2018) Formal security analysis of neural networks using symbolic intervals. arXiv preprint arXiv:1804.10829

  35. Weng T-W, Zhang H, Chen H, Song Z, Hsieh C-J, Boning D, Dhillon IS, Daniel L (2018) Towards fast computation of certified robustness for ReLU networks. arXiv preprint arXiv:1804.09699

  36. Xiang W, Musau P, Wild AA, Lopez DM, Hamilton N, Yang X, Rosenfeld JA, Johnson TT (2018) Verification for machine learning, autonomy, and neural networks survey. CoRR arXiv:1810.01989

  37. Xiang W, Tran H-D, Johnson TT (2017) Reachable set computation and safety verification for neural networks with ReLU activations. arXiv preprint arXiv:1712.08163

  38. Xiang W, Tran H-D, Johnson TT (2018) Output reachable set estimation and verification for multilayer neural networks. IEEE Trans Neural Netw Learn Syst (99):1–7

  39. Xiang W, Tran H-D, Johnson TT (2019) Specification-guided safety verification for feedforward neural networks. AAAI Spring symposium on verification of neural networks

  40. Zhang H, Weng T-W, Chen P-Y, Hsieh C-J, Daniel L (2018) Efficient neural network robustness certification with general activation functions. In: Advances in neural information processing systems, pp 4944–4953


Acknowledgements

The material presented in this paper is based upon work supported by the Air Force Office of Scientific Research (AFOSR) through contract numbers FA9550-18-1-0122 and FA9550-19-1-0288, the Defense Advanced Research Projects Agency (DARPA) through contract number FA8750-18-C-0089, and the National Science Foundation (NSF) through Grant Number 1910017. The U.S. government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of AFOSR, DARPA, or NSF.

Author information


Correspondence to Hoang-Dung Tran.

Additional information

Annabelle McIver, Maurice ter Beek and Cliff Jones

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tran, HD., Pal, N., Lopez, D.M. et al. Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter. Form Asp Comp 33, 519–545 (2021). https://doi.org/10.1007/s00165-021-00553-4

