Robustness verification of ReLU networks via quadratic programming

Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep neural net classifier. In this work we present a procedure where we solve a convex quadratic programming (QP) task to obtain a lower bound on the DtDB. This bound is used as a robustness certificate of the classifier around a given sample. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques.


Introduction
The high predictive power of neural network classifiers makes them the method of choice to tackle challenging classification problems in many areas. However, questions regarding the robustness of their performance under slight input perturbations still remain open, severely limiting the applicability of deep neural network classifiers to sensitive tasks that require certification of the obtained results.
In recent years this issue has gained a lot of attention, resulting in a large variety of methods tackling tasks ranging from adversarial attacks and defenses against these to robustness verification and robust training. In this work we focus on robustness verification, that is, computing the distance from a given anchor point x_0 in the input space to its closest adversarial example, i.e., a point that is assigned a different class label by the network.
Editors: Annalisa Appice, Sergio Escalera, Jose A. Gamez, Heike Trautmann.

This problem plays a fundamental role in understanding the behavior of deep classifiers and essentially provides the only reliable way to assess classifier robustness. Unfortunately, its complexity class does not allow a polynomial-time algorithm. For deep classifiers with ReLU activation the verification problem can equivalently be reformulated as a mixed integer programming (MIP) task and was shown to be NP-complete by Katz et al. (2017). Even worse, Weng et al. (2018) showed that an approximation of the minimum adversarial perturbation of a certain (high) quality cannot be found in polynomial time.
Related work There exist two streams of related work on robustness verification of deep ReLU classifiers. This categorization is based on whether they are solving the verification problem exactly or verifying a bound on the distance to the decision boundary (DtDB).
The first group of methods are exact verification approaches. As mentioned above, the verification task can be modeled using MIP techniques. Katz et al. (2017) present a modification of the simplex algorithm, based on satisfiability modulo theories (SMT), that can be used to solve the verification task exactly for smaller ReLU networks. Other approaches (Ehlers 2017) rely on SMT solvers when solving the described task. Bunel et al. (2018) provide an overview and comparison of those. Other exact methods (Dutta et al. 2018; Lomuscio and Maganti 2017; Tjeng et al. 2017) deploy MIP solvers together with presolving to find a tight formulation of the MIP problem, or (Jordan et al. 2018) use an algorithm to find the largest ball around the anchor point that touches the decision boundary.
The second popular class of methods for verifying classifier robustness deals with verification of an ε-neighborhood: given an anchor point x_0 and an ε > 0, the task is to verify whether an adversarial point exists within the ε-neighborhood of x_0, which is defined with respect to a certain norm in the input space. All existing methods relax the initial problem and require bounds on activation inputs in each layer. These bounds should be as tight as possible to ensure good final results. Raghunathan et al. (2018a, b) and Dvijotham et al. (2018, 2019) consider semidefinite (SDP) and linear (LP) problems as relaxations of the ε-verification problem. Wong and Kolter (2018) replace ReLU constraints by linear constraints and consider the dual formulation of the obtained LP relaxation. Weng et al. (2018) present an approach that also uses linear functions (later extended to quadratic functions by Zhang et al. 2018) to deal with nonlinear activation functions and propagate the layer-wise output bounds until the final layer. Salman et al. (2019) provide a unifying framework for the approaches using neuron-wise relaxations of the activation functions and use the best possible convex relaxation. Finally, Hein and Andriushchenko (2017) and Tsuzuku et al. (2018) use the Lipschitz constant of the transformations within the classifier's architecture.
Our approach belongs to the same group of inexact verifiers, but deals with constructing lower bounds on DtDB without necessarily restricting admissible adversarial points to a given neighborhood. Croce et al. (2019) leverage the piecewise affine nature of the outputs of a ReLU classifier and compute lower bounds on DtDB by assuming that the classifier behaves globally the same way it does in the linear region around the given anchor point. The ε-verification task is closely related to this problem, since each ε-neighborhood that is certified as adversarial-free immediately provides a lower bound on the minimal adversarial perturbation magnitude. It is also a common strategy for the ε-verification methods to use a binary search or a Newton method on top of their algorithm to find the largest ε such that the ε-neighborhood around x_0 is still successfully verified as robust.
Adversarial attacks Constructing misclassified examples that are close to the anchor point can be considered as a complementary research direction to robustness verification, since each adversarial example by definition provides an upper bound on the DtDB. Many methods were proposed to construct such points (Szegedy et al. 2014; Goodfellow et al. 2015; Kurakin et al. 2016; Papernot et al. 2016; Madry et al. 2017; Carlini and Wagner 2017).
Robust training The question of how to actually train a robust classifier is closely related to robustness verification, since the latter might allow us to construct some type of robust loss based on the insights from the verification procedure (Hein and Andriushchenko 2017; Madry et al. 2017; Wong and Kolter 2018; Raghunathan et al. 2018a; Tsuzuku et al. 2018; Wang et al. 2018; Croce et al. 2019). We leave this direction for future work.

Contributions
1. We propose a novel relaxation of the DtDB problem in the form of a QP task, allowing efficient computation of high-quality lower bounds on the DtDB in the l_2-norm, with an extension to the l_∞-norm. We reach state-of-the-art performance for dense and convolutional networks compared to the bounds obtained from methods based on LP relaxations (CROWN by Zhang et al. 2018 and ConvAdv by Wong and Kolter 2018). Furthermore, our method runs much faster than methods based on SDP relaxations (Raghunathan et al. 2018b), while providing smaller lower bounds. This is a fundamental property due to the difference in computational complexity between SDP and QP tasks.
2. Unlike ε-verification techniques, we provide a lower bound on DtDB without an initial guess and without computing bounds for the neuron activation values in each layer. If additional information is present allowing the user to bound the distance to any admissible adversarial point from above, we incorporate these upper bounds in our formulation to verify larger regions around the anchor point. Such bounds have to be tight enough to verify non-trivial neighborhoods and play an important role in other relaxation techniques such as the SDP-based approaches by Raghunathan et al. (2018b) and Dvijotham et al. (2019). We describe an efficient search method for pre-activation bounds resulting in larger verified regions based on sequential convex quadratic programming (QP).
3. To analyze the gap in the optimal objective function value between the initial DtDB problem and our relaxation, we establish a connection of DtDB's dual problem to our QP task. It allows us to deconstruct this gap into two components. Moreover, we discuss how we improve the QP formulation to close the gap to DtDB and how we bound one of its components.
The remainder of this paper is organized as follows. In Sect. 2 we introduce the necessary notation. In Sect. 3.1 we formally define the problem of finding the smallest adversarial perturbation and in Sect. 3.2 we introduce its QP relaxation QPRel. There we also formulate the dual DtDB problem as the best convex QP relaxation. In Sect. 3.3 we introduce additional linear constraints using bounds on the region of the admissible points around x_0 and summarize our verification procedure. In Sect. 4 we compare our approach to the LP- and SDP-based competitors. We summarize our findings in Sect. 5 and discuss directions for future work.

Notation and idea
We consider a neural network consisting of L linear transformations representing dense, convolutional, skip or average pooling layers and L − 1 ReLU activations (no ReLU after the last hidden layer). The number of neurons in layer l is denoted as n_l for l = 0, …, L, meaning that the data has n_0 features and n_L classes. Furthermore, we present our analysis for the l_2-norm as perturbation measure, since only a few available methods are applicable to this setting. To make our method comparable with the approach by Raghunathan et al. (2018b) we propose a generalization to the l_∞-setting as well.
Given a sample x_0 ∈ ℝ^{n_0}, weight matrices W^l ∈ ℝ^{n_l × n_{l−1}}, and bias vectors b^l ∈ ℝ^{n_l}, we define the output of the ith neuron in the lth layer after the ReLU activation as

x_i^l = [(W^l x^{l−1} + b^l)_i]_+ for l = 1, …, L − 1, with x^0 = x_0 and x^L = W^L x^{L−1} + b^L, (1)

where [x]_+ is the positive part of x and f(x_0) = x^L denotes the output of the complete forward pass through the network. We start with the observation that for each pair of scalars x and y the following holds (also used by Raghunathan et al. 2018b and Dvijotham et al. 2019 for ε-verification):

y = [x]_+ ⟺ y ≥ 0, y ≥ x and y(y − x) = 0. (2)
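The layer-wise definition in Eq. (1) can be sketched in a few lines of numpy (a minimal illustration; the weights, biases and shapes below are hypothetical, not the architectures used in the experiments):

```python
import numpy as np

def relu_forward(x0, weights, biases):
    """Forward pass as in Eq. (1): ReLU after every layer except the last,
    whose (linear) output is the logit vector f(x0)."""
    x = x0
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = W @ x + b
        # apply the positive part [.]_+ on all hidden layers only
        x = np.maximum(z, 0.0) if l < len(weights) - 1 else z
    return x
```

Calling `relu_forward` with a list of L weight matrices and bias vectors reproduces the container x^0, …, x^L used throughout the paper, with x^L being the network output.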
This relation allows us to obtain an optimization problem with linear complementarity constraints.

Formulation of DtDB
For a given sample x_0, pre-trained neural network f, predicted label ỹ and adversarial label y, we aim to find the closest point to x_0 in ℝ^{n_0} that has a larger or equal probability of being classified as y compared to the initial label. This task corresponds to the following optimization problem:

min_x ‖x − x_0‖² s.t. (e_y − e_ỹ)ᵀ f(x) ≥ 0,

where e_i is the ith unit vector in ℝ^{n_L} and ‖x‖ denotes the Euclidean norm of x. To compute the distance from x_0 to the (full) decision boundary, one needs to compute the solution for all adversarial labels y = 1, …, n_L except ỹ. Next we unfold the above optimization problem using (1), where x denotes a container with all variables x^0, …, x^L and [L] is the set {1, …, L}:

(DtDB) min_x ‖x^0 − x_0‖² s.t. (e_y − e_ỹ)ᵀ x^L ≥ 0, x^L = W^L x^{L−1} + b^L,
x^lᵀ(x^l − W^l x^{l−1} − b^l) = 0 for l ∈ [L − 1], (3)
x^l ≥ 0, x^l ≥ W^l x^{l−1} + b^l for l ∈ [L − 1]. (4)

We apply (2) to reformulate the problem and eliminate x^L, such that from now on n = n_0 + ⋯ + n_{L−1} and x contains only the remaining variables x^0, …, x^{L−1}.
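The complementarity characterization of ReLU can be checked numerically: a vector x^l is the ReLU output of layer l exactly when it is non-negative, dominates the pre-activation, and is elementwise orthogonal to their difference. A small sketch under these assumptions (the tolerance handling is ours, not part of the paper):

```python
import numpy as np

def satisfies_complementarity(x_prev, x_curr, W, b, tol=1e-9):
    """x_curr = [W x_prev + b]_+ holds exactly when x_curr >= 0,
    x_curr >= W x_prev + b, and x_curr * (x_curr - W x_prev - b) = 0
    elementwise -- the linear complementarity constraints of DtDB."""
    z = W @ x_prev + b
    return (np.all(x_curr >= -tol)                       # non-negativity
            and np.all(x_curr >= z - tol)                # dominates pre-activation
            and np.all(np.abs(x_curr * (x_curr - z)) <= tol))  # complementarity
```

Any point propagated through the network by the true ReLU forward pass passes this check, while a point with slack on an active neuron fails it.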

QP relaxation
To get rid of the quadratic equality constraints (3) we consider a Lagrangian relaxation of DtDB:

(QPRel) min_x ‖x^0 − x_0‖² + c(x, λ) s.t. (e_y − e_ỹ)ᵀ(W^L x^{L−1} + b^L) ≥ 0 and (4),

where for arbitrary vectors x^0 ∈ ℝ^{n_0}, …, x^{L−1} ∈ ℝ^{n_{L−1}} and λ ∈ ℝ_+^{L−1} we define

c(x, λ) = Σ_{l=1}^{L−1} λ_l x^lᵀ(x^l − W^l x^{l−1} − b^l) (5)

as the propagation gap. The obtained problem is indeed a QP with linear constraints. We need to clarify two questions: how does the problem QPRel help us with solving DtDB, and how do we solve this problem itself efficiently?

QPRel vs. DtDB
QPRel returns a robust radius It follows directly from the definition of the Lagrangian relaxation QPRel that for an arbitrary non-negative λ the following holds:
- if x is feasible for DtDB, we have c(x, λ) = 0, meaning that x equals the vector obtained by propagating x_0 through the neural network as defined in (1);
- if x is feasible for QPRel, then c(x, λ) ≥ 0, meaning that there might be a slack between the true output of layer l when getting x_0 as an input and the value of x^l.
In general the following holds for the relation between the solution of QPRel and DtDB (see Fig. 1). We include the proof of Lemma 1 and all other results in "Appendix B".

Lemma 1
Denote the solution of QPRel by x_qp and the square root of its optimal objective value by d_qp, and let d be the square root of the optimal objective value of DtDB. The following holds:
1. d_qp ≤ d, and when c(x_qp, λ) = 0 we have d_qp = d and x_qp is optimal for DtDB.
2. d_qp is monotone with respect to λ, that is, for two non-negative multipliers λ¹ ≤ λ² (componentwise) we have d_qp(λ¹) ≤ d_qp(λ²).

The first result from Lemma 1 ensures that d_qp provides a radius of a certified region around the anchor point, whereas the second part indicates that we should choose λ as large as possible to get our lower bound closer to DtDB. Unfortunately, as we show below, QPRel becomes non-convex for large values of λ. While one could try to tackle a non-convex QP with proper optimization methods, we next derive conditions under which QPRel is guaranteed to be convex and can be solved efficiently.
Convexity of QPRel To look into the problem QPRel in more detail we introduce the Hessian M (which is a constant matrix) of its objective function. Let E_l ∈ ℝ^{n_l × n_l} be the identity matrix of the corresponding dimension and set λ_0 = 1. We define M ∈ ℝ^{n×n} as the symmetric block tridiagonal matrix with blocks M_{l,l} = 2λ_l E_l and M_{l,l−1} = −λ_l W^l. Using this matrix we rewrite the objective function from QPRel as a quadratic form in M plus linear and constant terms (6) (see "Appendix B", Lemma 4 for the proof and the definition of the terms), where B_1 influences only the linear term and is therefore not relevant in this section. From this reformulation we clearly see that the matrix M determines the (non-)convexity of the objective function. The following theorem provides sufficient and necessary conditions on λ, depending on the weights W^l, assuring that M is positive semi-definite. This allows us to use off-the-shelf QP solvers with excellent convergence properties.
Theorem 1 Let W^1, …, W^{L−1} be the weights of a pre-trained neural network and ‖W‖ the spectral norm of an arbitrary matrix. Then the following two conditions on λ provide correspondingly a sufficient criterion (7) and a necessary criterion (8) for the matrix M to be positive semi-definite:

(suf. condition) λ_1 ≤ 2λ_0 / ‖W^1‖² and λ_l ≤ λ_{l−1} / ‖W^l‖² for l ≥ 2. (7)

Furthermore, we define λ̲ and λ̄ that correspondingly satisfy conditions (7) and (8) with equality (9). Finally, in the case of a single hidden layer, M is positive semi-definite even for λ = λ̄ from (8).
We use (7), (8) and our previous results as guidelines for the choice of λ. Since d_qp(λ) is monotone in the sense of Lemma 1, we perform a binary search between λ̲ and λ̄ to find the point closest to λ̄ (where the QP is non-convex for networks with more than one hidden layer) such that the QP remains convex. We denote the obtained λ by λ̂. This preprocessing step does not considerably affect the runtime, since checking whether a matrix is positive semi-definite is done efficiently by Cholesky decomposition. However, it significantly improves the final bounds compared to the bounds obtained when using λ = λ̲ from (7).
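The construction of M and the binary search for λ̂ can be sketched as follows (a simplified illustration: `lam_lo` and `lam_hi` stand for the multipliers λ̲ and λ̄ from conditions (7) and (8), which we take as given inputs here; the PSD test mirrors the Cholesky-based check described above):

```python
import numpy as np

def build_hessian(weights, lam):
    """Assemble the block-tridiagonal Hessian M with lambda_0 = 1,
    diagonal blocks M_{l,l} = 2*lambda_l*E and off-diagonal blocks
    M_{l,l-1} = -lambda_l * W^l (weights = [W^1, ..., W^{L-1}])."""
    lam = np.concatenate(([1.0], np.asarray(lam)))      # prepend lambda_0 = 1
    sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
    off = np.cumsum([0] + sizes)
    M = np.zeros((off[-1], off[-1]))
    for l, s in enumerate(sizes):
        M[off[l]:off[l+1], off[l]:off[l+1]] = 2.0 * lam[l] * np.eye(s)
    for l, W in enumerate(weights, start=1):
        M[off[l]:off[l+1], off[l-1]:off[l]] = -lam[l] * W
        M[off[l-1]:off[l], off[l]:off[l+1]] = -lam[l] * W.T
    return M

def is_psd(M, ridge=1e-10):
    """Positive semi-definiteness test via a Cholesky attempt on M + ridge*I."""
    try:
        np.linalg.cholesky(M + ridge * np.eye(M.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

def largest_convex_lambda(weights, lam_lo, lam_hi, steps=30):
    """Binary search on the segment between lam_lo and lam_hi for the
    multiplier closest to lam_hi that still keeps M positive semi-definite."""
    t_ok, t_bad = 0.0, 1.0
    for _ in range(steps):
        t = 0.5 * (t_ok + t_bad)
        if is_psd(build_hessian(weights, (1 - t) * lam_lo + t * lam_hi)):
            t_ok = t
        else:
            t_bad = t
    return (1 - t_ok) * lam_lo + t_ok * lam_hi
```

As in the paper, this search is a one-off preprocessing step per classifier; the resulting multiplier is reused for all anchor points and adversarial labels.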
Note that this procedure has to be done only once for a given classifier. λ̂ is then used to solve QPRel for all anchor points and adversarial labels. This is a significant computational advantage compared to SDP-based ε-verification methods. For example, Dvijotham et al. (2019) include the dual multipliers as variables in a relaxation of the SDP problem that has to be solved for each combination of anchor point, adversarial label and verified epsilon.
Relation to the dual of DtDB Since QPRel is a Lagrangian relaxation of the non-convex quadratically constrained QP DtDB, we unavoidably have a gap between their optimal objective values, but get a simpler problem to solve in return. To investigate and approximate the components of that gap, we look at the relation of DtDB and QPRel from the perspective of duality theory. A similar question was investigated by Salman et al. (2019) for the existing ε-verification methods based on neuron-wise LP relaxations. However, our method does not fall into this category, because the relaxation happens jointly for all layers.
Note that our formulation of the DtDB problem contains quadratic equality constraints (3) and therefore has a non-convex admissible set. For the derivation of its dual problem we refer to the supplementary material (see "Appendix B") and summarize here the most important result.

Theorem 2 Solving the Lagrange dual problem of the non-convex DtDB is equivalent to solving the problem

max_{λ ∈ ℝ_+^{L−1}: M(λ) ⪰ 0} QPRel(λ),
where we slightly redefine the notation and write QPRel(λ) for the optimal objective function value of QPRel for the corresponding λ. We also denote by λ* the optimal λ for the above problem. Now we are ready to formulate the result that provides a way to estimate how large the difference is between the optimal objective function value of QPRel for λ̂, constructed using Theorem 1, and the one for the optimal λ*. The latter is defined by Theorem 2 and would provide the best bound we can get when constraining ourselves to convex QP relaxations.
Lemma 2 Denote by λ* the optimal λ defined in Theorem 2, by λ̂ the λ we use for verification, by λ̄ the vector defined in (9), by c(x, λ) the propagation gap defined in (5) and by x_qp the solution of QPRel(λ̂). Then we get the following upper bound on the possible improvement of QPRel's objective function for a λ that is different from our λ̂:

QPRel(λ*) − QPRel(λ̂) ≤ c(x_qp, λ̄ − λ̂).

In summary, we have the following relation between the values defined above, where we add -P and -D to the problem name to denote its primal and dual forms respectively:

DtDB-P ≥ DtDB-D = QPRel(λ*) ≥ QPRel(λ̂).

We have shown how to find a good λ̂ and are able to estimate the gap behind the second ≥ sign as shown in Lemma 2. Additionally, in the next section we describe how to close the duality gap behind the first ≥ sign by introducing additional constraints to the QPRel problem.

Improving bounds via additional linear constraints
The initial DtDB problem and its relaxation QPRel do not require bounds on the pre-activation values W^l x^{l−1} + b^l frequently used in ε-verification approaches. However, if available, these can improve our relaxation. That is, given some bounds a^l, ā^l ∈ ℝ^{n_l} for layer l, we can additionally bound the admissible set of QPRel by

a^l ≤ W^l x^{l−1} + b^l ≤ ā^l. (10)

Moreover, for each neuron i in layer l with a_i^l < 0 < ā_i^l we include the following linear constraint, as also widely used in other verification methods for ReLU networks (Ehlers 2017; Wong and Kolter 2018; Dvijotham et al. 2019; Salman et al. 2019):

x_i^l ≤ ā_i^l ((W^l x^{l−1} + b^l)_i − a_i^l) / (ā_i^l − a_i^l). (11)
Note that constraints (10) and (11) are linear and therefore the new relaxation is still a QP.
Before continuing the discussion of how we exploit these bounds, we first introduce the notion of a proper bound propagation mapping. We need this to ensure that the resulting solution of QPRel with these additional constraints is still a lower bound on DtDB. For a fixed anchor point and network weights, consider a mapping from a bound ε ∈ ℝ_+ in the input layer to the bounds a^l(ε), ā^l(ε) ∈ ℝ^{n_l}. We call this mapping a proper bound propagation mapping if
1. the bounds are valid: for all x^0 with ‖x^0 − x_0‖ ≤ ε the inequalities (10) hold for the corresponding pre-activation values in each layer as defined in (1), and
2. the bounds are monotone: for arbitrary ε_1 ≤ ε_2 and each hidden layer l of the network it holds that ā^l(ε_2) ≥ ā^l(ε_1) ≥ a^l(ε_1) ≥ a^l(ε_2).
In our experiments we deploy the bound propagation technique by Wong and Kolter (2018) to obtain the bounds a^l, ā^l, since it satisfies these properties and is computationally efficient.
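As an illustration of a proper bound propagation mapping, the sketch below uses plain interval arithmetic on an l_2 input ball (an assumption for illustration only: it is looser than the Wong and Kolter (2018) bounds deployed in the paper, but it satisfies both the validity and the monotonicity requirements):

```python
import numpy as np

def propagate_bounds(x0, eps, weights, biases):
    """A simple proper bound propagation mapping: returns pre-activation
    bounds (a^l, abar^l) per layer, valid for all x with ||x - x0||_2 <= eps
    and monotone in eps (plain interval arithmetic)."""
    bounds, lo, hi = [], None, None
    for l, (W, b) in enumerate(zip(weights, biases)):
        if l == 0:
            # first layer: |w_i^T (x - x0)| <= eps * ||w_i||_2 row-wise
            center = W @ x0 + b
            radius = eps * np.linalg.norm(W, axis=1)
            a, abar = center - radius, center + radius
        else:
            # interval arithmetic: split W into positive and negative parts
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            a = Wp @ lo + Wn @ hi + b
            abar = Wp @ hi + Wn @ lo + b
        bounds.append((a, abar))
        lo, hi = np.maximum(a, 0.0), np.maximum(abar, 0.0)  # post-ReLU box
    return bounds
```

Because the boxes only widen as ε grows, the monotonicity condition above holds by construction.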
Lemma 3 When using a proper bound propagation mapping, the following holds for the square root d_qp(ε) of the optimal objective function value of QPRel (we drop the dependence on λ, since it is now fixed) solved with the additional constraints (10) and (11) using pre-activation bounds a^l(ε), ā^l(ε): whenever d_qp(ε) ≤ ε, the value d_qp(ε) is a valid lower bound on the DtDB.

Guided by the results of Lemma 3, we apply binary search to find the smallest ε that still provides us with a lower bound d_qp(ε) on the smallest adversarial perturbation (the smaller the value of ε, the better the resulting bound). In each step we solve a convex QP and increase ε if QPRel is infeasible, that is, the current bounds a^l(ε), ā^l(ε) are too tight, or if d_qp(ε) > ε, since in this case we do not have a certificate for d_qp(ε) to be a valid lower bound on DtDB. Otherwise we set the current ε as the right boundary of the search interval and proceed with a smaller value of ε. The whole procedure is summarized in Algorithm 1.
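The binary search of Algorithm 1 can be sketched independently of the QP solver (here `solve_qprel` is a hypothetical callable that solves QPRel with the bounds a(ε), ā(ε) and returns d_qp(ε), or None when the problem is infeasible):

```python
def certify(solve_qprel, eps_hi, steps=20):
    """Binary search over eps as in Algorithm 1: enlarge eps when QPRel is
    infeasible (bounds too tight) or when d_qp(eps) > eps (no certificate);
    otherwise shrink the interval and keep the certified lower bound."""
    lo, hi, best = 0.0, eps_hi, 0.0
    for _ in range(steps):
        eps = 0.5 * (lo + hi)
        d = solve_qprel(eps)
        if d is None or d > eps:
            lo = eps             # no valid certificate at this eps
        else:
            best = max(best, d)  # d is a certified lower bound on the DtDB
            hi = eps
    return best
```

Each iteration costs one convex QP solve; the driver itself is solver-agnostic.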

l ∞ -Setting
For comparison with the SDP-based approach by Raghunathan et al. (2018b) we show how we apply our method to compute bounds on the distance to the closest adversarial measured in the l_∞-norm. A straightforward way would be to modify the objective function accordingly. By introducing a new variable m representing ‖x^0 − x_0‖_∞² = max_i (x_i^0 − x_{0,i})² and n_0 new quadratic constraints m ≥ (x_i^0 − x_{0,i})², we get a corresponding version of QPRel. Note that the quadratic constraints do not harm the complexity, since they describe a convex cone and can be handled by QP solvers. While this formulation is of a similar structure as QPRel (quadratic objective as well as linear and quadratic constraints), the Hessian of the objective function is not positive semi-definite for any value of λ. Since c(x, λ) is the only source of quadratic terms now (the squared distance to the anchor point is replaced by m), the new M is of the same form as in (6), but with λ_0 = 0. To see that we cannot affect the convexity of the objective function by the parameter λ anymore, consider a vector x with an arbitrary x^0 ∈ ℝ^{n_0} as well as x^1 = αW^1 x^0 for some 0 < α < 1 and x^l = 0 for l > 1. Then

xᵀ M x = 2λ_1 α(α − 1)‖W^1 x^0‖² < 0,

meaning that M cannot be positive semi-definite.
To overcome this issue, we utilize the new quadratic constraints. We return to a convex QP by considering the same problem with a positive weight μ on the squared l_2 distance.
Clearly, for 0 < μ ≤ n_0^{−1} the solution of this problem is a finite lower bound on DtDB with the l_∞-norm. On the other side, we are back in the setting of Theorem 1 with λ_0 = μ, allowing us to use the same framework as before. In Sect. 4 we obtain the results in the l_∞-setting by solving this problem with μ = (2n_0)^{−1}.
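The inequality behind the choice of μ is elementary: ‖v‖_2² ≤ n_0 ‖v‖_∞², so any μ ≤ n_0^{−1} makes the μ-weighted squared l_2 distance a lower bound on the squared l_∞ distance. A quick numerical check (the function name is ours, for illustration):

```python
import numpy as np

def linf_lower_from_l2(v, mu=None):
    """Return (mu * ||v||_2^2, ||v||_inf^2); the first value lower-bounds
    the second whenever mu <= 1/n0, e.g. for the paper's mu = 1/(2*n0)."""
    n0 = v.size
    mu = 1.0 / (2.0 * n0) if mu is None else mu
    return mu * float(np.dot(v, v)), float(np.max(np.abs(v)) ** 2)
```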

Experiments
For each considered sample we apply the procedure described in Sect. 3.3 (Algorithm 1), including tightening of the relaxation by introducing additional linear constraints (10) such that the feature values lie in the [0, 1] interval. For each of the datasets we use the correctly classified samples among 120 train points to evaluate the verification approaches. For classification we take ReLU networks consisting of dense and convolutional linear layers. The architectures we used for the image datasets are named D2, D4, D8 (dense networks containing 2, 4 and 8 hidden layers consisting of 50 neurons each, with the exception of the last 4 layers in D8 that have 20 neurons each) and C. We use network structures similar to those of Wong and Kolter (2018) to enable easier comparison. The architecture C consists of two convolutional layers with 4 × 4 windows, a stride of 2, and 16 and 32 output channels correspondingly, followed by two dense layers with input/output dimensions of 1568/100 and finally 100/10. For each architecture we use normally trained classifiers as well as robustly trained ones (indicated by suffix R, e.g. CR) using the method by Wong and Kolter (2018) with ε = 1.58 in the l_2-setting and ε = 0.1 in the l_∞-setting. For the tabular datasets we use a dense network with two hidden layers of 10 neurons, also called D2, and different ε values in the l_2-setting: 0.113 for IRIS and 0.195 for WINE (and the same ε = 0.1 in the l_∞-setting). The weights as well as the project code are available at github.com/Aleksei-Kuvshinov/QPRel. In Table 1 we show the clean accuracy of the trained networks on the corresponding test sets.
Competitors We compare our approach QPRel with the following verification methods: ConvAdv by Wong and Kolter (2018), based on the LP relaxation of ReLU constraints (we use its implementation supporting the l_2-norm by Croce et al. 2019); CROWN by Zhang et al. (2018), a layer-wise bound propagation technique including performance-boosting quadratic approximations and warm start (for dense networks only, since its implementation did not support convolutional layers); and SDPRel by Raghunathan et al. (2018b), based on an SDP relaxation solved by MOSEK.
Metrics The results on MNIST and Fashion-MNIST for the l_2- and l_∞-setting are shown in Tables 3 and 4 correspondingly. We show the results on the tabular data in Table 2. We run the methods for each of the considered samples and report the following metrics. (1) AvgBound: the average value of the bounds obtained from QPRel and the corresponding competitor (the best value is marked bold if it is at least 5% larger than the worst one). To assess the impact of introducing additional linear constraints using a bound propagation method as described in Sect. 3.3, we report the lower bounds obtained by solving QPRel without constraints (10) and (11) in the last column AvgBound (no BndProp) in Tables 3 and 4. (2) MedRelDiff to QPRel: the median of the relative difference between the bounds (e.g. QPRel minus CROWN, divided by CROWN). Positive values for the lower bounds mean our bounds are better on average over the samples. (3) ε to hit 50% LB-verified: the number of samples with an adversarial-free radius of ε is monotonically decreasing in ε. Therefore, to assess the performance of a verification procedure like QPRel or CROWN, we report the smallest ε such that exactly 50% of the samples are successfully verified. The larger this value, the better (the largest values are marked bold).

l_2-setting, state-of-the-art bounds For all considered architectures the lower bounds computed by QPRel are tighter on average than those of the competitors (see Table 3, AvgBound and MedRelDiff), and for the networks with a smaller number of hidden layers even for most individual images. Naturally, this results in larger values of ε to hit 50% LB-verified as well. The competitors tend to underestimate the robustness of the considered networks, especially if they were not trained robustly. For the normally trained convolutional network C on MNIST we were able to improve the competitors' lower bounds by a factor of 2 on average.
In contrast to other verification procedures that cannot easily verify networks that were not robustly trained, our method is applicable to normally trained networks as well. While this improvement of the verifiable radius comes at a higher computational cost (QPRel is about one order of magnitude slower than the LP competitors) due to a fundamental difference in complexity of LP and QP tasks, the average runtime per sample is still only seconds or less for the dense networks and multiple minutes for the convolutional networks. We present a detailed runtime comparison in "Appendix A".
In the last column of Table 3, we report the lower bound obtained when solving QPRel without introducing additional constraints as described in Sect. 3.3. We observe that the relaxation becomes less tight for networks with more layers and if the network was trained robustly. We suppose that when the number of layers L becomes larger, the binary search between λ̲ and λ̄ (see Theorem 1 and the discussion afterwards) in a higher-dimensional space results in a point far from the optimal Lagrange multipliers. Especially the last components λ̲_{L−1} and λ̄_{L−1} defined in (9) become small, such that the gap between x^{L−1} and W^{L−1} x^{L−2} + b^{L−1} has only a very limited effect on the objective function of QPRel. That results in an undesired optimal solution of QPRel with a large propagation gap. By introducing additional linear constraints [especially (11)] we prohibit this behavior, bounding the propagation gap on the set of feasible points. Overall, incorporating additional linear constraints using bounds on ReLU's input has proven to significantly improve our relaxation and the resulting lower bounds.
l_∞-setting, comparison with SDP relaxations In order to compare our method with the work done by Raghunathan et al. (2018b), we generalize QPRel to the l_∞-setting as described in Sect. 3.4. Note that the resulting relaxation is looser than the initial QPRel for the l_2-setting, since we bound the l_∞-distance from below to make the problem quadratic and convex. To compute the largest ε such that the SDP verification succeeds, we perform a binary search on the [0, 1] interval. Since this approach takes longer to run, we test it only on the networks D2 and D2R trained with ε = 0.1 (MNIST data).
In the l_∞-setting our bounds are about 3 times smaller than those of SDPRel (see Table 4, MedRelDiff to QPRel), though computed three orders of magnitude faster (see "Appendix A"). This shows that the QP relaxation is less suited than the competitors for obtaining tight bounds in the l_∞-setting, as already indicated by the arguments above on the nature of the quadratic relaxation, but trades this off for much better efficiency compared to SDPRel.

Conclusion and future work
We presented a novel approach to approximating the minimal adversarial perturbation for ReLU networks based on a convex QP relaxation of DtDB. We showed that the lower bounds computed with QPRel allow certification of larger neighborhoods. Since convexity of the underlying QP determines the computational efficiency of our approach, we derived necessary and sufficient conditions on the Lagrangian multipliers. The obtained lower bounds in the l_2-setting show state-of-the-art results, allowing larger radii around the data samples to be certified as adversarial-free.

With our contribution we make a step towards robustness verification of deep ReLU-based classifiers. While the proposed theoretical framework is applicable to any linear transformations, including dense, convolutional and average pooling layers as well as skip connections, it requires a different analysis when non-ReLU activation functions are used (except leaky ReLU). To be able to apply the approach to a wider class of networks, it should be generalized to popular architectures beyond ReLU activations. Last but not least, the excellent results that our method demonstrated for the verification task indicate an intriguing research direction toward robust training. Based on our certificates, the next step towards robust training would be an approach that uses the solution of QPRel to make an update step resulting in a larger certified neighborhood for the correctly classified samples. As our approach does not require a predefined ε, this additional regularization acts individually for each sample depending on its current robust neighborhood.

A Runtime
Tables 5, 6, 7 and 8 (see "Appendix C") show the average runtime and its standard deviation for the considered experiments. During the binary search procedure we apply for SDPRel, we always make 10 bisection steps. Furthermore, we speed up this approach by loosening the solver's stopping criteria (see Table 9 for details) such that the optimization procedure terminates earlier (approximately after half of the usual number of iterations). We can still rely on the obtained results, since we are not interested in the exact value of the SDP objective, but only in whether it is positive or negative, which was observed to be determined far sooner during the solution process than when the solver reaches a true optimum. All tasks necessary for the computation of bounds on DtDB for one sample are run on four CPUs (including the solution of QPs and SDPs with Gurobi and MOSEK respectively). The column Runtime-LB (s), sequential QPRel shows the runtime of the whole bound improvement procedure as described in Sect. 3.3. From the comparison of QPRel and the LP-based approaches we see the clear advantage of the latter, since they do not involve solving an optimization task. However, especially in the l_2-setting this advantage comes at the cost of the verification properties, as discussed in Sect. 4. On the other hand, SDPRel with a binary search provides better bounds, but is about three orders of magnitude slower than QPRel.
Proof Assume x_adv is the optimal solution of DtDB. Then it is an admissible point of QPRel as well, and c(x_adv, λ) = 0, since x^l_adv = ReLU(W^l x^{l−1}_adv + b^l) for l = 1, …, L − 1. Since x_qp is optimal for QPRel and x_adv is an admissible point, we get d_qp ≤ d, proving the first claim. The second one follows from the fact that c(x, λ) for a given x is a linear function of λ:

c(x, λ) = Σ_{l=1}^{L−1} λ_l c(x, e_l),

where each c(x, e_l) = x^lᵀ(x^l − W^l x^{l−1} − b^l) is non-negative for admissible x because of the non-negativity constraints (4). Therefore the claim follows immediately from the assumption that λ¹_l ≤ λ²_l for all l. ◻

Theorem 3 Let W^1, …, W^{L−1} be the weights of a pre-trained neural network and ‖W‖ the spectral norm of an arbitrary matrix. Then the following two conditions on λ provide correspondingly a sufficient and a necessary criterion for the matrix M to be positive semi-definite.
(suf. condition) λ_1 ≤ 2λ_0 / ‖W^1‖² and λ_l ≤ λ_{l−1} / ‖W^l‖² for l ≥ 2. (7)

Note that for λ such that M is not positive semi-definite there exists x such that xᵀ M x < 0. Therefore, the inner optimization task is unbounded in this case. That means we can introduce the desired constraint on λ and solve the convex QP explicitly, obtaining the following equivalent formulation of the dual. By splitting the maximization task in two, we obtain a nested problem whose inner task is a convex QP. Therefore, it can be transformed to its dual without introducing a duality gap. Following the steps we have done backwards (now with a fixed λ), we obtain exactly the QPRel problem as the dual of the inner optimization problem. That concludes the proof, since we arrive at the formulation from the claim. ◻

Lemma 2 Denote by λ* the optimal λ defined in Theorem 2, by λ̂ the λ we use for verification, by λ̄ the vector defined in (9), by c(x, λ) the propagation gap defined in (5) and by x_qp the solution of QPRel(λ̂). Then we get the following upper bound on the possible improvement of QPRel's objective function for a λ that is different from our λ̂.

Proof The proof is done by sorting the quadratic, linear and constant terms in the objective function. From the quadratic term we can identify the blocks of M as claimed. ◻

C Tables
Funding Open Access funding enabled and organized by Projekt DEAL. This research was supported by the BMW AG.

Data availability
The authors provide references to all data and material used in this work.
Code availability Custom code is provided including the installation instructions. It requires installation of the Gurobi solver; academic licenses are available at gurobi.com.
Declarations

Open Access This article is licensed under a Creative Commons Attribution 4.0 International Licence. If your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.