Output Range Analysis for Deep Feedforward Neural Networks

  • Souradeep Dutta
  • Susmit Jha
  • Sriram Sankaranarayanan
  • Ashish Tiwari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10811)

Abstract

Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks, which are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive global combinatorial search, formulated as a mixed-integer linear programming (MILP) problem, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed-integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.
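To make the "inexpensive local optimization" phase concrete, the sketch below shows one way such a step can look for a single-hidden-layer ReLU network over a box domain: projected ascent using a finite-difference gradient estimate. This is an illustrative reconstruction, not the paper's implementation (the names `nn_output` and `local_maximize`, the finite-difference gradient, and the fixed step size are all assumptions made here for clarity; the actual tool, Sherlock, couples its local search with a MILP solver, which is omitted).

```python
# Illustrative sketch of a local-search phase for output range analysis:
# projected finite-difference ascent on a small ReLU network over a box
# domain [lo, hi]^n. Not the authors' implementation.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def nn_output(x, W1, b1, w2, b2):
    """Scalar output of a single-hidden-layer ReLU network."""
    return float(w2 @ relu(W1 @ x + b1) + b2)

def local_maximize(x0, W1, b1, w2, b2, lo, hi,
                   step=0.05, iters=200, eps=1e-4):
    """Ascend toward a local maximum of the network output,
    projecting each iterate back onto the box [lo, hi]."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    n = x.size
    for _ in range(iters):
        # Central-difference estimate of the gradient of the output.
        g = np.zeros(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = eps
            g[i] = (nn_output(x + e, W1, b1, w2, b2) -
                    nn_output(x - e, W1, b1, w2, b2)) / (2.0 * eps)
        x_new = np.clip(x + step * g, lo, hi)
        # Stop once the projected step no longer improves the output.
        if nn_output(x_new, W1, b1, w2, b2) <= nn_output(x, W1, b1, w2, b2):
            break
        x = x_new
    return x, nn_output(x, W1, b1, w2, b2)
```

In the full algorithm, the value returned by a local step like this serves as a lower bound on the true maximum; the global MILP phase then either certifies that bound as tight or produces a strictly better point from which local search restarts.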

Acknowledgments

We gratefully acknowledge inputs from Sergio Mover and Marco Gario for their helpful comments on an earlier version of this paper. This work was funded in part by the US National Science Foundation (NSF) under award numbers CNS-1646556, CNS-1750009, CNS-1740079 and US ARL Cooperative Agreement W911NF-17-2-0196. All opinions expressed are those of the authors and not necessarily of the US NSF or ARL.

References

  1. Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)
  2. Barrett, C.W., Sebastiani, R., Seshia, S.A., Tinelli, C.: Satisfiability modulo theories. Handbook of Satisfiability 185, 825–885 (2009)
  3. Bastani, O., Ioannou, Y., Lampropoulos, L., Vytiniotis, D., Nori, A., Criminisi, A.: Measuring neural net robustness with constraints. In: Advances in Neural Information Processing Systems, pp. 2613–2621 (2016)
  4. Bixby, R.E.: A brief history of linear and mixed-integer programming computation. Documenta Mathematica, 107–121 (2012)
  5. Bunel, R., Turkaslan, I., Torr, P.H.S., Kohli, P., Kumar, M.P.: Piecewise linear neural network verification: a comparative study. CoRR, abs/1711.00455 (2017)
  6. Dutta, S.: Sherlock: an output range analysis tool for neural networks. https://github.com/souradeep-111/sherlock
  7. Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A.: Verified inference of feedback control systems using feedforward neural networks. Draft (2017). Available upon request
  8. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19
  9. Gurobi Optimization, Inc.: Gurobi optimizer reference manual (2016)
  10. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. CoRR, abs/1610.06940 (2016)
  11. IBM ILOG Inc.: CPLEX MILP solver (1992)
  12. Julian, K., Kochenderfer, M.J.: Neural network guidance for UAVs. In: AIAA Guidance, Navigation and Control Conference (GNC) (2017)
  13. Kahn, G., Zhang, T., Levine, S., Abbeel, P.: PLATO: policy learning using adaptive trajectory optimization. arXiv preprint arXiv:1603.00622 (2016)
  14. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5
  15. Katz, G., et al.: Reluplex: CAV 2017 prototype (2017). https://github.com/guykatzz/ReluplexCav2017
  16. Kurd, Z., Kelly, T.: Establishing safety criteria for artificial neural networks. In: Palade, V., Howlett, R.J., Jain, L. (eds.) KES 2003. LNCS (LNAI), vol. 2773, pp. 163–169. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45224-9_24
  17. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
  18. Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward ReLU neural networks. CoRR, abs/1706.07351 (2017)
  19. Luenberger, D.G.: Optimization by Vector Space Methods. Wiley, Hoboken (1969)
  20. Mitchell, J.E.: Branch-and-cut algorithms for combinatorial optimization problems. In: Handbook of Applied Optimization, pp. 65–77 (2002)
  21. Papernot, N., McDaniel, P.D., Goodfellow, I.J., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697 (2016)
  22. Pulina, L., Tacchella, A.: An abstraction-refinement approach to verification of artificial neural networks. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 243–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_24
  23. Pulina, L., Tacchella, A.: Challenging SMT solvers to verify neural networks. AI Commun. 25(2), 117–135 (2012)
  24. Sassi, M.A.B., Bartocci, E., Sankaranarayanan, S.: A linear programming-based iterative approach to stabilizing polynomial dynamics. In: Proceedings of IFAC 2017. Elsevier, Amsterdam (2017)
  25. Scheibler, K., Winterer, L., Wimmer, R., Becker, B.: Towards verification of artificial neural networks. In: MBMV Workshop, pp. 30–40 (2015)
  26. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. CoRR, abs/1312.6199 (2013)
  27. Tjeng, V., Tedrake, R.: Verifying neural networks with mixed integer programming. CoRR, abs/1711.07356 (2017)
  28. Vanderbei, R.J.: Linear Programming: Foundations and Extensions, 2nd edn. Springer, Heidelberg (2001). http://www.princeton.edu/~rvdb/LPbook/
  29. Williams, H.P.: Model Building in Mathematical Programming, 5th edn. Wiley, Hoboken (2013)
  30. Xiang, W., Tran, H.-D., Johnson, T.T.: Output reachable set estimation and verification for multi-layer neural networks. CoRR, abs/1708.03322 (2017)
  31. Xiang, W., Tran, H.-D., Rosenfeld, J.A., Johnson, T.T.: Reachable set estimation and verification for a class of piecewise linear systems with neural network controllers. To appear in the American Control Conference (ACC), Invited Session on Formal Methods in Controller Synthesis (2018)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. University of Colorado, Boulder, USA
  2. SRI International, Menlo Park, USA