Neural networks serve as effective controllers in a variety of complex settings due to their ability to represent expressive policies. The complex nature of neural networks, however, makes their output difficult to verify and predict, which limits their use in safety-critical applications. While simulations provide insight into the performance of neural network controllers, they are not enough to guarantee that the controller will perform safely in all scenarios. To address this problem, recent work has focused on formal methods to verify properties of neural network outputs. For neural network controllers, we can use a dynamics model to determine the output properties that must hold for the controller to operate safely. In this work, we develop a method to use the results from neural network verification tools to provide probabilistic safety guarantees on a neural network controller. We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy. Next, we modify the traditional formulation of Markov decision process model checking to provide guarantees on the overapproximated policy given a stochastic dynamics model. Finally, we incorporate techniques in state abstraction to reduce overapproximation error during the model checking process. We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks that are loosely inspired by Airborne Collision Avoidance System X (ACAS X), a family of collision avoidance systems that formulates the problem as a partially observable Markov decision process (POMDP).
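The model-checking step described above, bounding the probability of reaching an unsafe state when the verified overapproximation only narrows the policy down to a *set* of possible actions per state, can be illustrated with a minimal sketch. Everything below (the state space, transition structure, and function names) is hypothetical for illustration and is not taken from the paper's code:

```python
def max_reach_prob(states, actions_for, transitions, unsafe, horizon):
    """Worst-case probability of reaching an unsafe state within `horizon`
    steps, maximizing over every action the overapproximated policy allows.

    transitions[s][a] is a list of (next_state, probability) pairs;
    actions_for(s) yields the actions the overapproximation cannot rule out.
    """
    # Base case: unsafe states are reached with probability 1.
    p = {s: 1.0 if s in unsafe else 0.0 for s in states}
    for _ in range(horizon):
        # Backward induction; taking the max over allowed actions makes the
        # result an upper bound on the true policy's reachability probability.
        p = {
            s: 1.0 if s in unsafe else max(
                sum(prob * p[s2] for s2, prob in transitions[s][a])
                for a in actions_for(s)
            )
            for s in states
        }
    return p

# Toy 3-state example: state 2 is unsafe; in state 0 the overapproximation
# cannot rule out action 'b', which risks transitioning into state 2.
transitions = {
    0: {'a': [(1, 1.0)], 'b': [(2, 0.5), (0, 0.5)]},
    1: {'a': [(1, 1.0)]},
    2: {'a': [(2, 1.0)]},
}
bound = max_reach_prob([0, 1, 2], lambda s: transitions[s], transitions,
                       unsafe={2}, horizon=2)
print(bound[0])  # 0.75: worst-case probability of reaching state 2 in 2 steps
```

Because the maximization ranges over every action the overapproximated policy might select, the returned value is a sound upper bound regardless of which of those actions the true neural network policy actually takes; tightening the overapproximation (or refining the state abstraction, as in the paper) shrinks the action sets and therefore the bound.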
The neural networks used in this research can be found in the “networks” folder of the repository located at https://github.com/sisl/AdaptiveVerification.
The code for the adaptive verification portion of the work can be found at https://github.com/sisl/AdaptiveVerification, and the code for the model checking is located at https://github.com/sisl/NeuralModelChecking. The repository used to generate the networks used in this work is at https://github.com/sisl/VerticalCAS.
This research was supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Editors: Daniel Fremont, Alessio Lomuscio, Dragos Margineantu, Cheng Soon Ong.
About this article
Cite this article
Katz, S.M., Julian, K.D., Strong, C.A. et al. Generating probabilistic safety guarantees for neural network controllers. Mach Learn (2021). https://doi.org/10.1007/s10994-021-06065-9
Keywords
- Neural network controller
- Model checking