
1 Introduction

Ensuring the safe operation of self-driving cars requires controlling the vehicle through software crafted by numerous developers using complex architectures, various programming languages, middleware, etc. Automating the validation and verification of this software is crucial for certification and a rapid release cycle. However, proving safety for all possible driving scenarios in the allowed operation domain has proven to be very challenging. A safe Adaptive Cruise Controller (ACC) has to keep a suitable distance to all relevant target objects, such that the automated driving vehicle maintains a safe distance even in the presence of uncertainties. Interestingly, some common ACC solutions have been shown to be unstable with respect to formal models [43]. To guarantee the safe operation of a controller in practice, one often resorts to a tailored redundant architecture, as well as exhaustive simulation and testing. While the latter methods can be performed automatically even when the controllers under test utilize Deep Neural Networks (DNNs) [46], they are neither sound (i.e., some bugs may be missed) nor complete (i.e., not every bug report corresponds to a real bug).

To address this limitation of current ACC verification approaches, we propose a formal framework for the development-time safety verification of adaptive cruise controllers using set-invariance methods. We model the motion of the vehicle and the relevant target object to keep a safe distance from as a discrete-time linear system subject to bounded control inputs. We specify the safety requirements as infinite-time collision avoidance, while restricting the cruise speed of the vehicles to suitable ranges. This allows us to define an operation set corresponding to the safety specification. Then, we compute a safe set within the operation set based on a Controlled Invariant Set (CIS) for the discrete-time linear system. The CIS is used to verify (or falsify) the closed-loop operation of several ACC implementations offline. For traditional controllers, a Bounded Model Checker (BMC) is utilized to prove safety. For DNN-based controllers, we propose a new hybrid verification approach based on decomposition: we verify the deployment code (loading the DNN, DNN-based inference, etc.) using a BMC, and the actual DNN with a dedicated neural network verification tool (e.g., Marabou [25]). Our case study shows that the proposed framework can verify (or falsify) the safety of both traditional and DNN-based ACC implementations within minutes on a standard workstation. The considered controllers are commonly employed in contemporary industrial and research approaches [13] for automated driving, including a model predictive controller [31] with over 5500 lines of code and a DNN-based controller [50].

The remainder of the paper is organized as follows. After summarizing related work (Sect. 2), the problem formulation for model-based safety verification of ACC is provided (Sect. 3). In Sect. 4, we present the verification framework based on set invariance. Then, we provide experimental results with several automated driving controllers (Sect. 5), followed by a discussion and conclusions (Sect. 6).

2 Related Work

While formal methods have been widely applied in the context of automated driving [30], the main focus so far has been on the behavior of an autonomous vehicle rather than on the specific software code deployed in the vehicle. Formal verification of traditionally implemented controllers has been addressed, e.g., by utilizing model checking [29], counterexample-guided search [44] or reachability analysis [2]. To avoid the need for verification, correct-by-design cruise control policies for the longitudinal motion of platoons of autonomous vehicles have been synthesized using set invariance, which guarantee infinite-time collision avoidance [40]. As determining safe state sets is a computationally demanding task, online monitoring approaches have also been proposed as more scalable options [22, 26, 36]. None of these methods perform safety verification or monitoring at code level; rather, they use models of the code, which might miss verification-relevant aspects of the implementation.

Many methods for input-output robustness certification of DNNs have also been proposed in recent years, including methods for feed-forward multi-layer [18], deep feed-forward [24], and convolutional neural networks [14]. Formal proofs of closed-loop safety have also been obtained for DNN-based controllers and various system types, e.g., [8, 20, 21, 28, 39, 41, 45]; however, these methods rely on either approximations or abstractions of the to-be-verified controller or the system, and thus tend to scale poorly with growing system complexity. Properties of DNN-based perception components can also be verified using probabilistic analysis, e.g., for guiding airplanes on taxiways [35]. Satisfiability Modulo Theory (SMT) solvers have also been used for the automatic verification of DNNs with respect to safety properties in cyber-physical systems, either by using dedicated interval constraint propagation [16] or by translating the closed-loop system into an SMT formula [42]. However, in the context of DNN-based controllers, too, safety verification has not been addressed for the deployed code.

Safety verification at code level has been addressed for an automated driving supervisor by automatically obtaining a finite discrete abstraction [32]. However, finite discrete abstraction approaches are not directly applicable to continuous controllers. The safety of adaptive cruise controller implementations has been assessed by embedding temporal logic specifications as monitors along with their execution, which can then be checked using bounded model checking [33]. However, such methods are not guaranteed to terminate in a reasonable amount of time for every implementation. Instead of providing arguments for absolute correctness, the test coverage of automated driving functions, in practice mostly provided by manual test drives and simulation runs, can be extended by searching for specification counterexamples for the implementation using reinforcement learning in simulation [11], by sampling initial conditions from the boundary of a controlled invariant set [6], or by (mostly) automatically extracting higher-level logic models from code [27], thus enabling exhaustive analysis for identifying potential errors prior to deployment. While these methods avoid the human bias inherent in manual testing and can discover corner cases that may otherwise be overlooked, infinite automatic abstraction-based approaches are not guaranteed to scale well for all systems and cannot provide safety guarantees for the complete desired operation domain of the controller.

3 Problem Statement

Fig. 1. A common longitudinal vehicle guidance architecture, adopted from [33]. The ACC provides a desired acceleration a to the lower-level controllers, which convert it to engine, brake and transmission signals for the vehicle actuators.

We consider a common adaptive cruise controller system architecture, as for example used in [48] and already considered in [33], shown in Fig. 1. The driver activates or deactivates the ACC and provides a desired velocity \(v_d\) and a desired time headway \(t_{h_d}\). The desired velocity is the target velocity for the automated driving vehicle (also referred to as the ego vehicle). The time headway is the amount of time after which the target object and the ego vehicle would collide, given the current distance, or headway h, if the target object suddenly stopped and the ego vehicle maintained its original velocity. Formally, the time headway \(t_{h}\) is the headway h over the current ego velocity v, i.e., \(t_h=h/v\). The desired time headway \(t_{h_d}\) to the target object corresponds to the relative distance that eventually needs to be maintained. The information about the target object is measured by sensors such as radars, cameras or a lidar. This information is utilized by the adaptive cruise controller to produce the desired acceleration for reaching the desired states of the ego vehicle. The acceleration commands from the ACC are used by the lower-level controllers, such as the engine control unit and/or power train, the transmission controller and the brake controller.

Deriving an analytical specification suitable for formal verification is a challenging task for the complete chain of effects from the sensors all the way to the actuators of a vehicle. The focus of this work is on verifying the safe closed-loop behavior of ACC software implementations (Fig. 1). Therefore, perfect sensing and ideal lower-level controllers are assumed. The analytical specification is based on a model of the vehicle’s relevant longitudinal dynamics [37]. Non-linear force-balance equations are combined with exact feedback linearization to compensate the non-linearities of the low-level chain of effects [19]. Inspired by [34], \(x=[v,v_T,h]^T\) is assumed as the overall state of the system, and the linearized vehicle motion is described by \(\dot{v}=a\), \(\dot{v}_T=a_T\) and \(\dot{h}=v_T-v\). These continuous differential equations are transformed into discrete-time difference equations by exact discretization with an equidistant sampling time \(t_s\). The continuous state variable x is replaced by the corresponding discrete-time version \(x_t\) at a discrete time instant t. Further, a zero-order hold is applied at each time instant t for the duration of \(t_s\) to a and \(a_T\), which are denoted by \(a_t\) and \(a_{T,t}\) in the discrete-time domain. Thus, the assumed analytical specification is

$$\begin{aligned} \varSigma : x_{t+1} = A x_t + B u_t = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -t_s & t_s & 1 \end{array} \right] x_t + \left[ \begin{array}{c} t_s \\ 0 \\ -0.5 t_s^2 \end{array} \right] a_t + \left[ \begin{array}{c} 0 \\ t_s \\ 0.5 t_s^2 \end{array} \right] a_{T,t}. \end{aligned}$$
(1)
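For concreteness, a minimal C sketch of one step of (1) under the zero-order hold is shown below; the function name and the in-place update are illustrative choices, not part of any referenced implementation.

```c
/* One step of the discrete-time model (1): state x = [v, v_T, h], inputs
   a (ego) and a_T (target object) held constant over the sampling time ts
   (zero-order hold). */
void plant_step(double x[3], double a, double a_T, double ts) {
  double v = x[0], vT = x[1], h = x[2];
  x[0] = v + ts * a;                                     /* v_{t+1}   */
  x[1] = vT + ts * a_T;                                  /* v_{T,t+1} */
  x[2] = h + ts * (vT - v) + 0.5 * ts * ts * (a_T - a);  /* h_{t+1}   */
}
```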

Without loss of generality, both the ego acceleration a and the target object acceleration \(a_T\) are bounded at all times, forming the set

$$\begin{aligned} O_u=\{a\in [a_{{min}},a_{{max}}] \wedge a_T\in [a_{{min}},a_{{max}}]\}. \end{aligned}$$
(2)

In accordance with the relevant ISO standard [1], the ACC computes a such that either the ego vehicle velocity v reaches the driver-desired velocity \(v_d\), or the headway h to the target object driving with velocity \(v_T\) stays above a specified minimal value \(h_{min}\) and the current time headway stays above a specified minimal time headway \(t_{h_{min}}\). If \(v_d< h/t_{h_d}\), the target object is irrelevant, and the only safety requirement is given by (2). Since (2) can be guaranteed by a simple limiter, in this work we focus on the so-called time gap or keep distance operation of the ACC with the corresponding operational set

$$\begin{aligned} O_c=\{v_d\ge h/t_{h_d} \wedge h\ge h_{min} \wedge h/v \ge t_{h_{min}}\}. \end{aligned}$$
(3)
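At code level, membership in (3) amounts to a few comparisons. The following C sketch (with hypothetical parameter names, assuming \(v>0\), cf. the velocity range in Sect. 5.1) illustrates this; the input bounds (2) reduce to an analogous interval check.

```c
/* Membership check for the time gap operation set (3); parameter names
   are hypothetical, and v > 0 is assumed. */
int in_operation_set(double v, double h, double v_d, double t_hd,
                     double h_min, double t_h_min) {
  return v_d >= h / t_hd     /* target object is relevant (time gap mode) */
      && h >= h_min          /* minimal headway */
      && h / v >= t_h_min;   /* minimal time headway */
}
```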

Consider an implementation of the adaptive cruise controller (Fig. 1) in a general-purpose programming language, e.g., C/C++, possibly containing a neural network, which provides the acceleration \(a_t\) based on the state \(x_t\) of the analytical specification (1). The controller is assumed to be time-invariant and deterministic. We study how to verify that the controller implementation provides only control signals \(a_t\) for (1) such that the closed-loop operation always remains in (2) and (3), for all driver parameters \(v_d\) and \(t_{h_d}\) and all states \(x_t\).

Fig. 2. Verification framework: based on the operation set O and the analytical specification \(\varSigma \), the set S is used to check the safe closed-loop operation of the ACC.

4 Framework

We propose a verification framework based on set invariance, as shown in Fig. 2. For a dynamical system, a set is invariant if every trajectory starting in this set remains in it for all times. For control systems, this means finding a control signal which renders a set invariant, i.e., a controlled invariant set. If all control signals produced by the controller yield states within the safe set, the controller can be certified as safe with respect to the analytical specification and the operation set. Thus, we compute the CIS of the analytical specification (1) for the operation sets (2) and (3). This allows us to obtain a corresponding safe set (denoted by the dashed area in Fig. 2) and effectively transform checking safety over (in)finite simulation traces into a set containment problem. Practically, set containment at code level is then established utilizing a state-of-the-art BMC. We first turn to obtaining the safe set.

4.1 Computing the Safe Set

The sets (2) and (3) can be readily encoded by means of the ego vehicle velocity v, the target object velocity \(v_T\) and the headway h, all contained in the state x. The operation set of the ACC combines (2) and (3), yielding a convex polytope over the states and inputs (with \(O_c\) and \(O_u\) lifted to the combined state and input space), i.e.,

$$\begin{aligned} O= O_c \cap O_u. \end{aligned}$$
(4)

Then, for (1) and (4) we compute the CIS (or an under-approximation thereof) as a polytope \(S_{CIS}\) represented by a finite number \(N_{S}\) of inequalities with corresponding matrices \(A_x^c\) and \(B_x^c\), i.e., \(S_{CIS}=\{x|A^c_x x \le B^c_x\}, A^c_x\in \mathbb {R}^{N_{S} \times 3}, B^c_x\in \mathbb {R}^{N_{S}}\). Note that \(S_{CIS}\) is a subset of the operation set O, i.e., \(S_{CIS}\subseteq O\). For any state \(x_t\in S_{CIS}\) at time t, there exists at least one admissible control action \(a_t\) and target object acceleration \(a_{T,t}\) with \([a_t,a_{T,t}]'\in O_u\), such that the following state \(x_{t+1}\) according to (1) remains within \(S_{CIS}\). Thus, to verify that a controller is safe, for any state \(x_t\in S_{CIS}\) we check that it produces an action that yields a following state according to (1) within \(S_{CIS}\). As (1) is linear and \(S_{CIS}\) is a polytope, the following safe state set \(S_{t+1}\) is also described by a polytope \(S_{t+1} = \{(x,u)|A^c_x (A x +B u) \le B^c_x\}\). Thus, the overall safe set comprises the intersection of the invariant set, the following safe state set and the admissible action region, all lifted to the combined state and input space, i.e.,

$$\begin{aligned} S= S_{CIS} \cap S_{t+1}\cap O_u. \end{aligned}$$
(5)
Fig. 3. CISs in 3D and 2D and their volumes for look-ahead times \(l = 1\), \(V_1 = 5.1\times 10^{4}\) (blue), \(l = 3\), \(V_3=8.8\times 10^{4}\) (red), and \(l = 5\), \(V_5 = 1.2\times 10^{5}\) (green). (Color figure online)
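Since \(S_{CIS}\) and \(S_{t+1}\) are polytopes, all containment checks reduce to evaluating a finite number of linear inequalities. The following C sketch shows such a membership test; the array names and the concrete value of \(N_S\) are placeholders for data exported from the CIS computation.

```c
/* Polytope membership test for S_CIS = {x | A_x^c x <= B_x^c}. The
   arrays and the number of inequalities N_S are placeholders. */
#define N_S 34  /* hypothetical number of inequalities */
extern const double A_cis[N_S][3]; /* A_x^c, row-major */
extern const double b_cis[N_S];    /* B_x^c */

int in_safe_state(const double x[3]) {
  for (int i = 0; i < N_S; i++) {
    double lhs = A_cis[i][0] * x[0] + A_cis[i][1] * x[1] + A_cis[i][2] * x[2];
    if (lhs > b_cis[i]) return 0; /* some inequality violated */
  }
  return 1; /* all N_S inequalities hold: x is in S_CIS */
}
```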

We use the method from [3] to compute a sequence of CISs with non-decreasing volume depending on a specified look-ahead time l. Figure 3 illustrates how the sets induced by a longer look-ahead time contain the ones induced by a shorter look-ahead time. This hierarchical relation allows computing the CIS in closed form in the original state space, at the price of an increased number of inequalities. Note that our framework is compatible with any invariant set computation method which provides an under-approximation of the actual invariant set of the analytical specification.

4.2 Verification of Controller Implementations

A controller implementation can be deemed safe when it operates only within the bounded domain of the CIS for the analytical specification. Executions of a to-be-verified controller are checked against the safe set using a bounded model checker. By using a BMC, the code is also implicitly checked for implementation and security flaws like integer overflows, out-of-bounds array accesses, illegal pointer dereferences, etc., in addition to checking safety. We utilize a BMC to check whether, for any possible point in the safe set S, the ACC under test produces a control output that yields a following state which is also inside S. Since the driver parameters \(v_d\) and \(t_{h_d}\) may take integer values in their respective ranges, verification can be performed individually for each of the possible combinations in parallel. For simplicity, we consider a fixed pair of \(v_d\) and \(t_{h_d}\) in the following. The continuous variables \(x_t\) and \(a_{T,t}\) are chosen freely within S by the BMC using assume statements. Based on these variables, the controller produces an output \(a_t\), and the safety properties S are evaluated as a set containment check using an assert statement.
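Concretely, such a check can be set up as a CBMC harness along the following lines. This is a sketch: acc_controller stands for the controller under test with fixed \(v_d\) and \(t_{h_d}\), plant_step and in_safe_state are the helpers sketched above, nondet_double and __CPROVER_assume are CBMC's nondeterministic-input and assumption primitives, and the bounds match Sect. 5.1.

```c
#include <assert.h>

double nondet_double(void);          /* CBMC: unconstrained input */
void __CPROVER_assume(_Bool cond);   /* CBMC: assumption primitive */

extern double acc_controller(double v, double vT, double h); /* ACC under test */
extern void plant_step(double x[3], double a, double a_T, double ts);
extern int in_safe_state(const double x[3]);

#define A_MIN (-4.0) /* bounds and sampling time as in Sect. 5.1 */
#define A_MAX 2.0
#define TS 0.2

int main(void) {
  /* Nondeterministic state x_t = [v, v_T, h] and target acceleration. */
  double x[3] = { nondet_double(), nondet_double(), nondet_double() };
  double aT = nondet_double();

  __CPROVER_assume(in_safe_state(x));           /* x_t in S_CIS */
  __CPROVER_assume(aT >= A_MIN && aT <= A_MAX); /* a_T in O_u */

  double a = acc_controller(x[0], x[1], x[2]);  /* controller output a_t */
  assert(a >= A_MIN && a <= A_MAX);             /* a_t in O_u */

  plant_step(x, a, aT, TS);                     /* successor state via (1) */
  assert(in_safe_state(x));                     /* x_{t+1} in S_CIS */
  return 0;
}
```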

Due to the aforementioned hierarchical relation of the CISs computed with [3] for different look-ahead times, a controller deemed safe for a look-ahead time l is safe for all look-ahead times \(l_1\) with \(1\le l_1\le l\). As higher look-ahead times l increase the size of the CIS, they also increase the probability of discovering counterexamples. Our approach also generalizes to the maximal CIS, i.e., when there exists no higher look-ahead time l with a larger corresponding CIS. In fact, as (1) is a discrete-time linear system without disturbances, the maximal controlled invariant set, which contains all possible safe scenarios, can be approximated well (e.g., using [17]), at the price of a higher complexity of \(S_{CIS}\). Since most CIS computation approaches provide an under-approximation of the actual CIS, the proposed verification procedure is sound. If the computed CIS is exact, the procedure is also complete.

4.3 Verification Decomposition for Large Neural Network Controllers

As BMCs enumerate possible branches during state space exploration, verification of large neural networks is not guaranteed to terminate in a reasonable amount of time. To overcome this problem, we use a three-step decomposition approach. First, states are selected based on a heuristic criterion which ensures that a representative set within the operational domain is tested for the deployed package. For this finite set of individual states \(x_t\) in S, the controller output is checked to remain in S (see the sketch below). This provides assurance that the input-output behavior of the overall controller, including the supporting code, performs as expected before the DNN verification itself.
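A minimal sketch of this first step, reusing the helpers from the previous sketches, is shown below; the two states correspond to those used in Sect. 5.1, while the fixed target acceleration and sampling time are illustrative choices.

```c
#include <assert.h>

extern double acc_controller(double v, double vT, double h); /* deployed NNC */
extern void plant_step(double x[3], double a, double a_T, double ts);
extern int in_safe_state(const double x[3]);

/* Step 1: concrete-state spot checks of the fully deployed controller.
   a_T = 0 and ts = 0.2 s are illustrative choices for this sketch. */
void spot_check(void) {
  const double states[2][3] = { {0.0, 0.0, 0.0}, {15.0, 5.0, 5.0} };
  for (int i = 0; i < 2; i++) {
    double x[3] = { states[i][0], states[i][1], states[i][2] };
    double a = acc_controller(x[0], x[1], x[2]);
    plant_step(x, a, 0.0, 0.2);  /* successor state via (1) */
    assert(in_safe_state(x));    /* must remain in the safe set */
  }
}
```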

Next, we take inspiration from recent research [9, 10] that proposes to verify a large DNN by initially reducing it to a simpler, smaller DNN for verification (abstraction) and iteratively making it more complex as needed (refinement). Accordingly, we replace the original DNN with a simpler DNN with the same inputs and outputs, and use a BMC to check that all code required for operational deployment (DNN model loading, DNN inference, etc.) works as expected. For our purposes, we assume that an abstraction approach was used to obtain the smaller DNN, and that the supporting code needed for the original DNN is fully executed for the deployment of the smaller DNN. This ensures that the supporting code is checked for implementation flaws like integer overflows, out-of-bounds accesses, etc. In this case, the operation is not required to always remain in S, since the smaller DNN model might not fulfill the safety specification.

As a final step, we use an off-the-shelf DNN verifier to check whether, for any point in S, the original DNN produces a control \(a_t\) that together with (1) yields a following state in S. Only if the verification is successful in all three steps is the safety specification considered verified.

5 Experiments

We applied the proposed verification framework to the following four common classes of adaptive cruise controllers, which are widely used for automated driving both in simulation environments and in industrial applications:

  1.

    A switching proportional controller (PC) with the gain \(k_P=3\) and

    $$\begin{aligned} a_t = k_P (\text {min}(v_{d}, h_t/t_{h_d}) - v_t), \end{aligned}$$

    where the min term switches between the time gap and adapt speed modes.

  2.

    A Nonlinear Controller (NC) known as the Intelligent Driver Model [47]:

    $$\begin{aligned} \begin{aligned} a_t&=a_{max}\left( 1-\left( \frac{v_{t}}{v_d}\right) ^\delta -\left( \frac{d(x_t)}{d_{T,t}}\right) ^2\right) ,\\ d(x_t)&=h_{t}+v_{t} t_{h_d}+\frac{v_{t} (v_{t}-v_{T,t})}{2\sqrt{a_{max} a_{com}}}, \end{aligned} \end{aligned}$$

    where \(d_{T,t}=1.8 \tilde{v}_t\) is the desired distance between the two vehicles, i.e., a distance in meters equal to around half of the current ego vehicle’s velocity \(\tilde{v}_t\) expressed in km/h (the recommended minimum distance according to German traffic rules), and \(a_{com}=1.5\,\mathrm{m/s}^2\) is the absolute value of the comfortable acceleration.

  3.

    A Model Predictive Controller (MPC) [31] using the model (1). The target object keeps \(a_{T}=0\) throughout the optimization horizon with \(N=5\) samples. With the initial state \(x_{\tilde{t}}|_{\tilde{t}=0}\), the following quadratic program is solved at each state \(x_t\):

    $$\begin{aligned} \begin{aligned} \text {min}_{a_{\tilde{t}}} &\sum _{\tilde{t}=0}^{N} (\Vert v_{\tilde{t}}-\text {min}(v_d, h_{\tilde{t}}/t_{h_d})\Vert +\Vert a_{\tilde{t}}\Vert ),\\ \text {s.t. }& \forall \tilde{t}\in [0,N],(1);\\ &\forall \tilde{t}\in [1,N], a_{T,\tilde{t}}=0; a_{\tilde{t}} \in O_u; x_{\tilde{t}} \in O;\\ &x_{\tilde{t}}|_{\tilde{t}=0}=x_t.\\ \end{aligned} \end{aligned}$$

    The controller is implemented using the Multi-Parametric Toolbox [17]. An explicit solution comprising 348 state feedback controllers over the relevant operational design domain (ODD) is exported to C.

  4.

    A Neural Network Controller (NNC) [50], which combines imitation learning from recorded demonstrations with the optimization of a reward function incorporating safety, efficiency, and comfort metrics to maximize cumulative rewards through simulation trials. Deep deterministic policy gradient (DDPG) is utilized to learn an actor network together with a critic network. We focus on verifying the actor, with input \(x_t\) and output \(a_t\). The actor has one hidden layer with 30 neurons. For all layers, the rectified linear unit activation function was used. A sketch of the actor inference is given after this list.
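For concreteness, the following C sketch shows the shape of the actor inference being verified (one hidden layer with 30 ReLU neurons, as described above); the weight and bias arrays are placeholders for the trained parameters, and a linear output layer is assumed in this sketch.

```c
/* Shape of the NNC actor under verification: x_t = [v, v_T, h] is mapped
   to the acceleration a_t through one hidden layer with 30 ReLU neurons.
   W1, b1, W2, b2 are placeholders for the trained parameters. */
#define N_HIDDEN 30
extern const double W1[N_HIDDEN][3], b1[N_HIDDEN]; /* input -> hidden */
extern const double W2[N_HIDDEN], b2;              /* hidden -> output */

double actor_infer(const double x[3]) {
  double a = b2;
  for (int i = 0; i < N_HIDDEN; i++) {
    double z = W1[i][0] * x[0] + W1[i][1] * x[1] + W1[i][2] * x[2] + b1[i];
    if (z < 0.0) z = 0.0;  /* ReLU activation */
    a += W2[i] * z;
  }
  return a;
}
```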

5.1 Setup

The sampling time \(t_s=0.2\) s is chosen for (1). The parameters \(a_{max}=2\,\mathrm{m/s}^2\), \(a_{min}=-4\,\mathrm{m/s}^2\), \(v_t,v_{T,t}\in (0,130]\) km/h, \(h_{max}=250\) m, \(h_{min}=5\) m, and \(t_{h_{min}}=1\) s are used. A desired ego velocity \(v_d=130\) km/h and a desired time headway \(t_{h_d} = 1.8\) s are assumed in the case study. CBMC 5.95.1 [7] is utilized as the bounded model checker with a timeout of 1 h. The analysis was performed on a standard workstation with an Intel Core i7-11850H CPU and 64 GB of DDR4 RAM. Regarding the decomposition approach for neural network controllers, we first check the actual DNN for the individual states \(x_t\in \{[0,0,0],[15,5,5]\}\). Then, the supporting code is checked using the BMC with a simplified neural network with the same inputs and outputs, but only 3 neurons in the hidden layer. Finally, the actual DNN is checked using the Marabou DNN verifier [25].

Table 1. Controller falsification/verification times (each in minutes) for the considered controllers, for different look-ahead times l and safe sets S with the corresponding number of inequalities \(N_{S}\).

5.2 Experimental Results

Computing the CIS for \(l=4\) took 1 min, and computing the CISs for all \(l< 4\) took less than half a minute. As this procedure needs to be executed only once and the resulting invariant sets are reused for evaluating the safety of all controller types, we focus on the runtime for verifying the code in the following. Table 1 shows the results from the development-time verification of the controllers. As the complexity of \(S_{CIS}\) increases with higher l, the verification times increase accordingly. As an increasing l enlarges the volume of \(S_{CIS}\), the probability of discovering a counterexample also grows with higher l. Therefore, it is not surprising that the NC could be verified for \(S_c\) with \(l=1\), but falsified with \(l\ge 2\). The MPC, which was designed to consider the analytical specification and operation set, was verified in all cases. Note that simply using BMC on the NNC timed out for all invariant sets. A similar result was reported in [42] for small DNNs controlling a cart-pole system, presumably caused by the non-linearity and non-invertibility of the DNN. However, using our proposed decomposition, we were able to falsify the NNC for all invariant sets. Using BMC, checking the supporting code took 10.5 min with the auxiliary neural network, and approx. 1 min for each individual state with the actual DNN. An additional minute was needed for the Marabou verifier to check the actual DNN.

All falsifying input-output pairs denote an insufficient ego vehicle deceleration, while the target object decelerates with \(a_{min}\) and the time headway is not large enough. While the PC decelerates slightly in this case, the NC and the NNC decelerate with nearly \(a_{min}\). For all falsified controllers, either algorithmic improvements are needed, or a dedicated supervisor component has to be introduced to guarantee safety at the price of some performance loss.

5.3 Discussion

The experimental results presented in the previous section show that our approach has several benefits. The considered ACCs were used to control traffic agents in a simulation environment, and using our method we could achieve more realistic and safe closed-loop behavior. First, we were able to find safety flaws at code level for all controllers except the MPC (Table 1), flaws which may have remained undiscovered with simulation- and field-based testing alone. Second, the acquired falsifying samples correspond to driving scenarios that provide feedback for improving the considered ACC controllers, either by deriving additional automated tests or by revisiting algorithmic solutions. Furthermore, applying formal verification even to only some software modules greatly supports the overall system-level analysis and design, e.g., by providing hints where a dedicated supervisor component might be required as a safety assurance measure. Third, for neural network controllers, our approach allows falsifying examples to be used to indicate possible gaps in the collected training data or undesirable biases in the current training stage. Finally, our framework can be integrated into the verification and validation process within the development of automated driving functions. As the only manual modeling step is deriving the analytical specification, and verifying controller implementations is possible in an automated manner within minutes, our solution can be included in the continuous integration (CI) process, connected to consecutive versions of the to-be-deployed controller software.

As falsification or verification was possible in all considered cases, the proposed framework is expected to be suitable for various controller types. The method is not limited to the analytical specification (1). While this paper uses a linear system, for which computing a CIS is known to have polynomial complexity [38], the presented verification approach can be readily used for any analytical specification and operation set that allow computing a CIS. Obtaining the CIS is possible for many nonlinear systems, e.g., [12]. Even though our work focuses on ACC, its key ideas can be transferred to other cyber-physical systems.

While the presented scheme has great potential for automated safety verification of many safety-critical controllers, several limiting factors have to be mentioned.

First, the obtained verification result does not generalize to all possible real-world driving scenarios and systems. Our framework provides either a proof that the controller satisfies the analytical specification or a proof of non-compliance accompanied by a counterexample. These proofs pertain to the correctness of the controller’s behavior in real-world driving only insofar as the analytical specification accurately reflects real-world traffic dynamics, physics, and perception/actuation mechanisms. Over-estimating possible violations of the analytical specification is preferable to under-estimating them, with the additional check of counterexamples in the actual automated driving system serving to eliminate false positives. While the current approach aims to over-estimate violations, further investigation is needed to understand the extent of its accuracy and the implications for proving correctness at system level.

Second, defining the operation set and the parameter set for model checking might become challenging with an increasing complexity of the analytical specification and the to-be-verified controller. Obtaining a suitable analytical specification for controller verification requires a trade-off between precision and complexity. Special hybrid systems formulations amenable for verification might be a good choice, e.g., [49].

Third, using significantly more complex dynamical system models as analytical specifications might make the computation of the CIS infeasible or the bounded model checking intractable. Even when linear models are employed, the choice of l affects the computational effort of the method [3] and poses a trade-off with respect to the size of the operational domain for verification. In general, computing controlled invariant sets for nonlinear systems is challenging. Some works employ convex approximations to reduce the computational complexity [12]. Other works exploit structural properties, e.g., for polynomial systems, to compute exact sets [4]. However, as the maximal controlled invariant set of a nonlinear system is typically non-convex, in general, the obtained CIS can be conservative. A closely related question for future exploration is whether other desired properties of the closed-loop operation of the controllers can be captured by invariants and consequently verified using our approach. For instance, in camera-based systems, a closed-loop operation objective would be to minimize unnecessary fluctuations of the control signals when the images remain stable.

Fourth, the particular implementation plays a decisive role for the verification complexity. As reported in [33], even using certain C++ Standard Library functions might not be the best choice for verification. Moreover, some bounded model checkers might be more suitable for verifying a particular piece of code than others. Similarly, different DNN verifiers might perform better than others on particular DNNs.

Finally, while the proposed framework provides functional safety and robustness guarantees with respect to the analytical specification, we note that some DNN verifiers cannot guarantee correct results for all possible malicious network inputs and/or network architectures. For example, floating-point errors have been observed to occasionally cause incorrect results with some DNN verifiers on large-scale benchmarks [23]. Therefore, it is advisable to study the specific features of the used tools in detail.

6 Conclusions

We proposed an automatic safety verification approach for adaptive cruise controllers of automated driving vehicles at code level, and applied it to both traditional and neural-network-based controllers. By computing a controlled invariant set for a given analytical specification, our approach obtains a safe set for the closed-loop operation, thereby enabling the verification of controller implementations with a bounded model checker. Furthermore, by proposing a three-step verification decomposition, we were able to falsify a neural-network-based controller for which off-the-shelf bounded model checkers timed out. The experimental results confirm that both traditionally implemented and neural-network-based adaptive cruise controllers can be verified offline within minutes on a regular computer, emphasizing the low computational overhead of the framework for cyber-physical system controllers.

In future work, we plan to apply our approach to the verification of additional types of controllers from the automotive domain, e.g., those responsible for automated lane keeping or lane changing. In addition, we intend to extend the approach to consider uncertainties in the system model, e.g., by using probabilistic [5, 15] and/or statistical model checking.