
Learning safe neural network controllers with barrier certificates

  • Original Article
  • Published in: Formal Aspects of Computing

Abstract

We provide a new approach to synthesizing controllers for nonlinear continuous dynamical systems subject to safety properties. The controllers are based on neural networks (NNs). To certify the safety property we utilize barrier functions, which are represented by NNs as well. We train the controller NN and the barrier NN simultaneously, achieving verification-in-the-loop synthesis. We provide a prototype tool, nncontroller, together with a number of case studies. The experimental results confirm the feasibility and efficacy of our approach.
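The simultaneous training of a controller NN and a barrier NN can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the 1-D dynamics dx/dt = x + u, the initial/unsafe sets, the exponential-condition constant, and the finite-difference training loop are all assumptions chosen for brevity, and the sketch omits the formal verification step that verification-in-the-loop synthesis additionally requires.

```python
import math
import random

random.seed(0)

H = 3                     # hidden width; each tiny net has 3*H + 1 parameters
LAM, EPS, DX = 0.5, 0.01, 1e-4

def mlp(p, x):
    """One-hidden-layer tanh network R -> R with parameter vector p."""
    out = p[3 * H]
    for i in range(H):
        out += p[2 * H + i] * math.tanh(p[i] * x + p[H + i])
    return out

def f(x, u):
    return x + u          # toy unstable dynamics (an assumption, not from the paper)

# Sample points encoding the three barrier-certificate conditions.
INIT   = [0.01 * i for i in range(-10, 11)]                # initial set |x| <= 0.1
UNSAFE = [s * (1.0 + 0.1 * i) for s in (-1, 1) for i in range(11)]
DOMAIN = [0.1 * i for i in range(-15, 16)]                 # working domain |x| <= 1.5

def loss(theta):
    """Hinge penalties for: B <= -eps on INIT, B >= eps on UNSAFE,
    and dB/dx * f(x, u(x)) <= lam * B(x) on DOMAIN (exponential condition)."""
    pc, pb = theta[:3 * H + 1], theta[3 * H + 1:]
    l = 0.0
    for x in INIT:
        l += max(0.0, mlp(pb, x) + EPS)
    for x in UNSAFE:
        l += max(0.0, EPS - mlp(pb, x))
    for x in DOMAIN:
        dB = (mlp(pb, x + DX) - mlp(pb, x - DX)) / (2 * DX)
        l += max(0.0, dB * f(x, mlp(pc, x)) - LAM * mlp(pb, x) + EPS)
    return l

# Train both networks simultaneously: finite-difference gradient descent with
# backtracking line search, so the penalty loss never increases.
theta = [random.uniform(-0.5, 0.5) for _ in range(2 * (3 * H + 1))]
l0 = loss(theta)
for step in range(100):
    base = loss(theta)
    grad = []
    for j in range(len(theta)):
        old = theta[j]
        theta[j] = old + 1e-5
        grad.append((loss(theta) - base) / 1e-5)
        theta[j] = old
    lr = 0.1
    while lr > 1e-8:
        cand = [t - lr * g for t, g in zip(theta, grad)]
        if loss(cand) < base:
            theta = cand
            break
        lr /= 2
l1 = loss(theta)
print(f"penalty loss: {l0:.3f} -> {l1:.3f}")
```

In practice one would replace the finite-difference loop with automatic differentiation, and — crucially — a zero penalty loss on samples is not yet a proof: the learned barrier NN must still be formally verified over the whole domain, with counterexamples fed back into training.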



Acknowledgements

We thank the anonymous reviewers for their valuable comments on earlier versions of this paper, and thank Prof. Jyotirmoy V. Deshmukh for his explanation of the bicycle model of Example 5.3. H. Zhao was supported partially by the National Natural Science Foundation of China (No. 61702425, 61972385); X. Zeng was supported partially by the National Natural Science Foundation of China (No. 61902325) and the "Fundamental Research Funds for the Central Universities" (SWU117058); T. Chen is partially supported by an NSFC grant (No. 61872340), a Guangdong Science and Technology Department grant (No. 2018B010107004), the Overseas Grant of the State Key Laboratory of Novel Software Technology (No. KFKT2018A16), and the Natural Science Foundation of Guangdong Province of China (No. 2019A1515011689); Z. Liu was supported partially by the National Natural Science Foundation of China (No. 62032019, 61672435, 61732019, 61811530327) and a Capacity Development Grant of Southwest University (SWU116007); J. Woodcock was partially supported by a research grant from Southwest University.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Xia Zeng.

Additional information

Xiaoping Chen, Ji Wang and Cliff Jones

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhao, H., Zeng, X., Chen, T. et al. Learning safe neural network controllers with barrier certificates. Form Asp Comp 33, 437–455 (2021). https://doi.org/10.1007/s00165-021-00544-5

