
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

  • Guy Katz
  • Clark Barrett
  • David L. Dill
  • Kyle Julian
  • Mykel J. Kochenderfer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10426)

Abstract

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
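The core difficulty the abstract refers to can be illustrated with a minimal sketch: a constraint y = ReLU(x) is piecewise linear but non-convex, so each ReLU splits the search into two linear cases, and naive enumeration is exponential in the number of ReLUs. The snippet below (an illustration only, with hypothetical helper names; it is not the paper's Reluplex procedure) shows the case split and a coarse interval-propagation bound on a one-neuron toy network.

```python
def relu(x):
    """Rectified Linear Unit: piecewise linear, but non-convex as a constraint."""
    return max(0.0, x)

def relu_cases(x, y):
    """A constraint y = ReLU(x) splits into two linear cases.

    Naively enumerating these splits over a whole network is exponential
    in the number of ReLUs, which is the blow-up the paper's extended
    simplex method is designed to avoid.
    """
    active = x >= 0 and y == x      # ReLU in its identity phase
    inactive = x <= 0 and y == 0    # ReLU in its zero phase
    return active or inactive

def relu_interval(lo, hi):
    """Propagate an input interval [lo, hi] through ReLU.

    A simple bound-propagation step, shown only for intuition; the
    paper's technique reasons about the network exactly, without such
    simplifying relaxations.
    """
    return (max(0.0, lo), max(0.0, hi))

# Toy check: for the one-neuron network y = ReLU(2x - 1) with x in [0, 1],
# the pre-activation ranges over [-1, 1], so the output ranges over [0, 1]
# and a property such as "y <= 1" holds.
lo, hi = relu_interval(2 * 0.0 - 1, 2 * 1.0 - 1)
assert (lo, hi) == (0.0, 1.0)
```

The two linear cases in `relu_cases` are exactly why ReLU networks fall outside plain linear arithmetic: a solver must either branch on each ReLU's phase or, as in the paper's approach, defer those splits inside an extended simplex procedure.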

Keywords

Deep Neural Networks (DNNs) · Satisfiability Modulo Theories (SMT) · Rectified Linear Unit (ReLU) · Airborne Collision Avoidance System


Acknowledgements

We thank Neal Suchy from the Federal Aviation Administration, Lindsey Kuper from Intel, and Tim King from Google for their valuable comments and support. This work was partially supported by a grant from Intel.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Guy Katz (email author)
  • Clark Barrett
  • David L. Dill
  • Kyle Julian
  • Mykel J. Kochenderfer

  All authors: Stanford University, Stanford, USA
