Abstract
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a symbolic abstraction of reachable values at each layer. This process is repeated from scratch independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work, we introduce a new method for reducing this verification cost without losing precision, based on the key insight that abstractions obtained at intermediate layers for different inputs and perturbations can overlap or contain each other. Leveraging this insight, we introduce the general concept of shared certificates, enabling proof effort reuse across multiple inputs to reduce overall verification costs. We perform an extensive experimental evaluation demonstrating the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications for image classifiers, including the popular patch and geometric perturbations. We release our implementation at https://github.com/eth-sri/proof-sharing.
Keywords
 Neural Network Verification
 Local Verification
 Adversarial Robustness
M. Fischer and C. Sprecher—Equal contribution.
C. Sprecher—Work performed while at ETH Zurich.
1 Introduction
The success of neural networks across a wide range of application domains [21, 30] has led to their widespread application and study. Despite this success, neural networks remain vulnerable to adversarial attacks [8, 23], which raises concerns over their trustworthiness in safety-critical settings such as autonomous driving and medical devices. To overcome this barrier, formal verification of neural networks has been proposed as a key technology in the literature [39]. As a result, recent years have witnessed a growing interest in verifying critical safety properties of neural networks (e.g., fairness, robustness) [14, 17, 18, 31, 32, 40, 42], specified using pre- and postconditions over network inputs and outputs, respectively. Conceptually, existing verifiers propagate sets of inputs in the precondition captured in symbolic form (e.g., convex sets) through the network, an expensive process that produces overapproximations of all possible values at intermediate layers. The final abstraction of the output can then be used to check postconditions. The key technical challenge all existing verifiers aim to address is speeding up and scaling the certification process, i.e., faster and more efficient propagation of symbolic shapes while reducing the overapproximation error.
This Work: Accelerating Certification via Proof Sharing. In this work, we propose a new, complementary method for accelerating neural network verification based on the key observation that instead of treating each certification attempt in isolation as existing verifiers do, we can reuse proof effort among multiple such attempts, thus obtaining significant overall speedups without losing precision. Figure 1 illustrates both standard verification and the concept of proof sharing.
In standard verification, an input region \(\mathcal {I}_1({\boldsymbol{x}})\) (orange square) is propagated from left to right, obtaining intermediate shapes at each intermediate layer (here, the goal is to verify that all points in the input region are classified as “cat” by the neural network N). We observe that the abstraction obtained for a new region \(\mathcal {I}_2({\boldsymbol{x}})\) (e.g., blue shapes) can be contained inside existing abstractions from \(\mathcal {I}_1({\boldsymbol{x}})\), an effect we term proof subsumption. This effect can be observed both between abstractions obtained from different specifications (e.g., \(\ell _{\infty }\) and adversarial patches) for the same data point and between proofs for the same property but different, yet semantically similar, inputs. Building on this observation, we introduce the notion of proof sharing via templates. Proof sharing works in two steps: first, we leverage abstractions from existing proofs to create templates, and second, we augment the verifier with these templates, stopping the expensive propagation at an intermediate layer as soon as the newly generated abstraction is included inside an existing template. Key technical ingredients to the effectiveness of our approach are fast template generation and inclusion checking techniques. We experimentally demonstrate that proof sharing can achieve significant speedups in challenging scenarios, including proving robustness to adversarial patches [10] and geometric perturbations [3], across different neural network architectures.
Main Contributions. Our key contributions are:

An introduction and formalization of the concept of proof sharing in neural network verification: the idea that some proofs capture others (Sect. 3).

A general framework leveraging the above concept, enabling proof effort reuse via proof templates (Sect. 4).

A thorough experimental evaluation involving verification of neural network robustness against challenging adversarial patch and geometric perturbations, demonstrating that our methods can achieve proof match rates of up to \(95\%\) as well as provide non-trivial end-to-end certification speedups (Sect. 5).
2 Background
Here we formally introduce the necessary background for proof sharing.
Neural Network. A neural network N is a function \(N: \mathbb {R}^{d_\text {in}} \rightarrow \mathbb {R}^{d_\text {out}}\), commonly built from individual layers \(N = N_L \circ N_{L-1} \circ \cdots \circ N_1\). Throughout this text, we consider feedforward neural networks, where each layer \(N_i({\boldsymbol{x}}) = \max ({\boldsymbol{A}}{\boldsymbol{x}}+ {\boldsymbol{b}}, 0)\) consists of an affine transformation (\({\boldsymbol{A}}{\boldsymbol{x}}+ {\boldsymbol{b}}\)) as well as a rectified linear unit (ReLU), which applies the \(\max \) with 0 elementwise. A neural network, classifying inputs into c classes, outputs \(d_\text {out} := c\) scores, one for each class, and assigns the class with the highest score as the predicted one. While, as is common in the neural network verification literature, we use image classification as a proxy task, many other applications work analogously. Our approach also naturally extends to other types of neural networks, if verifiers exist for these architectures. We discuss the challenges and limitations of such generalizations in Sect. 4.5. In the following, for \(k < L\), we let \(N_{1:k}\) denote the application of the first k layers and \(N_{k+1:L}\) denote the last \(L-k\) layers, respectively.
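The layer composition above can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming (`make_layer`, `apply_layers` are not from the paper's implementation), showing how \(N_{1:k}\) and \(N_{k+1:L}\) compose back into N:

```python
import numpy as np

def make_layer(A, b):
    """One layer N_i(x) = max(Ax + b, 0): affine transform followed by ReLU."""
    return lambda x: np.maximum(A @ x + b, 0.0)

def apply_layers(layers, x, start=0, end=None):
    """Compose layers[start:end]: apply_layers(layers, x) computes N(x),
    apply_layers(layers, x, 0, k) computes N_{1:k}(x)."""
    for layer in layers[start:end]:
        x = layer(x)
    return x
```

By construction, `apply_layers(layers, x)` equals applying the first k layers and then the rest, mirroring \(N = N_{k+1:L} \circ N_{1:k}\).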
(Local) Neural Network Verification. Given a set of inputs and a postcondition \(\psi \), the goal of neural network verification is to prove that \(\psi \) holds over the output of the neural network corresponding to the given set of inputs. In this work, we focus on local verification, proving that \(\psi \) holds for the network output for a given region \(\mathcal {I}({\boldsymbol{x}}) \subseteq \mathbb {R}^{d_\text {in}}\) formed around the input \({\boldsymbol{x}}\). Formally, we state this as:
Problem 1 (Local neural network verification)
For a region \(\mathcal {I}({\boldsymbol{x}}) \subseteq \mathbb {R}^{d_\text {in}}\), neural network N, and postcondition \(\psi \), verify that \(\forall {\boldsymbol{z}}\in \mathcal {I}({\boldsymbol{x}}). \; N({\boldsymbol{z}}) \models \psi \). We write \(\mathcal {I}({\boldsymbol{x}}) \models \psi \) if \(\forall {\boldsymbol{z}}\in \mathcal {I}({\boldsymbol{x}}). \; N({\boldsymbol{z}}) \models \psi \).
Here, we restrict ourselves to verifiers based on abstract interpretation [11, 14] as they achieve stateoftheart precision and scalability [31, 32]. Further, many other popular verifiers [38, 42] can be formulated using abstract interpretation. These verifiers propagate \(\mathcal {I}({\boldsymbol{x}})\) symbolically through the network N layerbylayer using abstract transformers, which overapproximate the effect of applying the transformations defined in the different layers on symbolic shapes. The propagation yields an abstraction of the exact shape at each layer. The verifiers finally check if the abstracted output implies \(\psi \). This is showcased in Fig. 1, where the input regions \(\mathcal {I}_{1}({\boldsymbol{x}})\) and \(\mathcal {I}_{2}({\boldsymbol{x}})\) are propagated layerbylayer through N.
For a verifier V, we let \(V(\mathcal {I}({\boldsymbol{x}}), N)\) denote the abstraction obtained after the propagation of \(\mathcal {I}({\boldsymbol{x}})\) through the network N. We declutter notation by overloading N and writing \(N(\mathcal {I}({\boldsymbol{x}}))\) for the same if V is clear from context, i.e., \(V(\mathcal {I}({\boldsymbol{x}}), N) = N(\mathcal {I}({\boldsymbol{x}}))\).
We consider robustness verification, where the goal is to prove that the network classification does not change within an input region. A common input region is the \(\ell _{\infty }\)-bounded additive noise, defined as \(\mathcal {I}_{\epsilon }({\boldsymbol{x}}) := \{{\boldsymbol{z}}\mid \Vert {\boldsymbol{x}} - {\boldsymbol{z}}\Vert _{\infty } \le \epsilon \}\). Here, \(\epsilon \) defines the size of the maximal perturbation to \({\boldsymbol{x}}\). The postcondition \(\psi \) denotes classification to the same class as \({\boldsymbol{x}}\). Throughout this paper, we consider different instantiations for \(\mathcal {I}({\boldsymbol{x}})\) but assume that \(\psi \) denotes classification invariance (although other choices would work analogously). Due to this, we refer to \(\mathcal {I}({\boldsymbol{x}})\) as input region and specification interchangeably. For example, in Fig. 1, the goal is to verify that all points contained in \(N(\mathcal {I}_{1}({\boldsymbol{x}}))\) are classified as “cat”.
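Concretely, the \(\ell _{\infty }\) region is just an elementwise interval around \({\boldsymbol{x}}\). The following sketch (our own illustrative helpers, not the paper's code) makes the definition operational:

```python
import numpy as np

def linf_region(x, eps):
    """Bounds of I_eps(x) = {z : ||x - z||_inf <= eps}, as elementwise lo/hi."""
    return x - eps, x + eps

def in_region(z, lo, hi):
    """Membership check: z lies in the region iff lo <= z <= hi elementwise."""
    return bool(np.all(lo <= z) and np.all(z <= hi))
```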
3 Proof Sharing with Templates
Before introducing our framework for proof sharing, we further expand on the motivating example discussed in Fig. 1.
3.1 Motivation: Proof Subsumption
As stated earlier, we empirically observed that for many input regions \(\mathcal {I}_i({\boldsymbol{x}})\) and \(\mathcal {I}_j({\boldsymbol{x}})\), the abstraction corresponding to one region at some intermediate layer k contains that of another. Formally:
Definition 1 (Proof Subsumption)
For specifications \(\mathcal {I}_i({\boldsymbol{x}}), \mathcal {I}_j({\boldsymbol{x}})\), we say that the proof of \(\mathcal {I}_i({\boldsymbol{x}})\) subsumes that of \(\mathcal {I}_j({\boldsymbol{x}})\) if at some layer k, \(N_{1:k}(\mathcal {I}_j({\boldsymbol{x}})) \subseteq N_{1:k}(\mathcal {I}_i({\boldsymbol{x}}))\), which we denote as \(\mathcal {I}_j({\boldsymbol{x}}) \subseteq _{N,k} \mathcal {I}_i({\boldsymbol{x}})\).
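A toy instance of Definition 1 can be checked with interval (Box) arithmetic. In the sketch below (`box_layer` is our own illustrative name, assuming sound interval propagation through \(\max ({\boldsymbol{A}}{\boldsymbol{x}}+{\boldsymbol{b}}, 0)\)), two input intervals that do not contain each other produce nested images after a single ReLU layer, i.e., one proof subsumes the other at \(k=1\):

```python
import numpy as np

def box_layer(lo, hi, A, b):
    """Soundly propagate the interval [lo, hi] through x -> max(Ax + b, 0)."""
    c, w = (lo + hi) / 2, (hi - lo) / 2          # center / half-width form
    oc, ow = A @ c + b, np.abs(A) @ w            # affine image of the box
    return np.maximum(oc - ow, 0.0), np.maximum(oc + ow, 0.0)  # ReLU clamps at 0
```

For example, with the identity layer, the inputs \([-1, 1]\) and \([-2, -0.5]\) are not nested, yet their images \([0, 1]\) and \([0, 0]\) are.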
While not formally required, particularly interesting are cases where proof subsumption occurs despite \(\mathcal {I}_j({\boldsymbol{x}}) \not \subseteq \mathcal {I}_i({\boldsymbol{x}})\) (if \(\mathcal {I}_j({\boldsymbol{x}}) \subseteq \mathcal {I}_i({\boldsymbol{x}})\), subsumption follows directly from the monotonicity of the abstract transformers). This form of proof subsumption is showcased in Fig. 1, where \(\mathcal {I}_{1}({\boldsymbol{x}})\) and \(\mathcal {I}_{2}({\boldsymbol{x}})\) have only a small overlap, yet \(\mathcal {I}_2({\boldsymbol{x}}) \subseteq _{N,k} \mathcal {I}_1({\boldsymbol{x}})\). For another example, consider a neural network N trained as a handwritten digit classifier for the MNIST dataset [22] (example shown in Fig. 2) and the following two specifications:

\(\ell _\infty \)-bounded perturbations: all pixels in an input image can be changed independently by a small amount, \(\mathcal {I}_{\epsilon }({\boldsymbol{x}}) := \{{\boldsymbol{z}}\mid \Vert {\boldsymbol{x}} - {\boldsymbol{z}}\Vert _{\infty } \le \epsilon \}\),

adversarial patches [10]: a \(p \times p\) patch, inside which the pixel intensity can vary arbitrarily, is placed on the image at coordinates (i, j), for which we write \(\mathcal {I}_{p \times p}^{i,j}\). We showcase a patch in Fig. 2 and formally define patches in Sect. 4.3.
Clearly \(\mathcal {I}_{p \times p}^{i,j}({\boldsymbol{x}}) \not \subseteq \mathcal {I}_{\epsilon }({\boldsymbol{x}})\) (unless \({\epsilon }=1\)). In Table 1, we show that for a classifier (5 layers with 100 neurons each) we indeed observe proof subsumption. We report the accuracy, i.e., the rate of correct predictions on the unperturbed test data, as well as the certified accuracy, i.e., the rate of samples \({\boldsymbol{x}}\) for which the prediction is correct and \(\mathcal {I}({\boldsymbol{x}}) \models \psi \) is verified, for \(\mathcal {I}_{{\epsilon }}\) with \({\epsilon }=0.1\) and 0.2 over the whole test set. We also show the percentage of \(\mathcal {I}_{2 \times 2}^{i,j}({\boldsymbol{x}})\) contained in \(\mathcal {I}_{{\epsilon }}({\boldsymbol{x}})\) at layer k. To this end, we pick 1000 random \({\boldsymbol{x}}\) for which \(\mathcal {I}_{{\epsilon }}({\boldsymbol{x}})\) is verifiable and sample 2 (i, j) pairs each. We utilize a Box domain verifier and a robustly trained network [24]. Figure 3 shows a patch specification \(\mathcal {I}_{2 \times 2}^{i,j}({\boldsymbol{x}})\) (in orange) contained in the \(\ell _{\infty }\) specification \(\mathcal {I}_\epsilon \) (in blue) projected to 2 dimensions via PCA.
Reasons for Proof Subsumption. In Table 1, we observe that the rate of proof subsumption increases with larger \({\epsilon }\) and k. These observations give an intuition as to why we observe proof subsumption. First, as input regions pass through the neural network, the abstractions become more imprecise in each layer. While this fundamentally limits verification, it makes the subsumption of abstractions more probable. This effect increases when \({\epsilon }\) is increased for \(\mathcal {I}_{\epsilon }\). Second, and more fundamentally, we observed that semantically similar yet distinct image inputs, e.g., two similar-looking handwritten digits, have activation vectors that grow closer in \(\ell _{2}\) norm as they pass through the layers of the neural network [21, 34]. This effect is a consequence of the neural network distilling low-level information (e.g., individual pixel values) into high-level concepts (e.g., the classes of digits). As specifications (and their proofs) correspond to sets of concrete inputs, a similar effect may apply. We conjecture that these two effects drive the observed proof subsumption.
3.2 Proof Sharing with Templates
Leveraging this insight, we introduce the idea of proof sharing via templates, showcased in Fig. 4. We use an abstraction obtained from a robustness proof \(N_{1:k}(\mathcal {I}_{1}({\boldsymbol{x}}))\) at layer k to create a template T. After ensuring that T is verifiable, it can be used to shortcut the verification of other regions, e.g., of \(\mathcal {I}_{2}({\boldsymbol{x}}), \dots , \mathcal {I}_{5}({\boldsymbol{x}})\). Formally we decompose proof sharing into two subproblems: (i) the generation of proof templates and (ii) the matching of abstractions corresponding to other properties to these templates. For simplicity, here we only consider templates at a single layer k of the neural network and we show an extension to multiple layers in Sect. 4.3.
Our goal is to construct a template T at layer k that implies the postcondition and captures abstractions at layer k obtained from propagating several \(\mathcal {I}_i({\boldsymbol{x}})\). As it is challenging to find a single T that captures abstractions corresponding to many input regions, yet remains verifiable, we allow a set of templates \(\mathcal {T}\). We state this formally as:
Problem 2 (Template Generation)
For a given neural network N, input \({\boldsymbol{x}}\), set of specifications \(\mathcal {I}_1, \dots , \mathcal {I}_r\), layer k, and postcondition \(\psi \), find a set of templates \(\mathcal {T}\) with \(|\mathcal {T}| \le m\) such that:
\(\max _{\mathcal {T}} \sum _{i=1}^{r} \mathbb {1}\big [ \textstyle \bigvee _{T \in \mathcal {T}} N_{1:k}(\mathcal {I}_i({\boldsymbol{x}})) \subseteq T \big ] \quad \text {s.t.} \quad \forall T \in \mathcal {T}.\; N_{k+1:L}(T) \models \psi .\)   (1)
Intuitively, Eq. (1) aims to find a set \(\mathcal {T}\) of templates T at layer k, such that the maximal amount (via the sum) of specifications \(\mathcal {I}_1, \dots , \mathcal {I}_r\) is contained in at least one template T (via the disjunction) while ensuring that the individual T are still verifiable (via the constraint on the second line). As the neural network verification required by the constraint of Eq. (1) is NP-complete [17], computing an exact solution to Problem 2 is computationally infeasible. Therefore, we compute an approximate solution to Eq. (1). In general, Problem 2 does not necessarily require that the templates T are created from previous proofs. However, building on proof subsumption, as discussed in Sect. 3.1, in Sect. 4 we will infer the templates from previously obtained abstractions.
To leverage proof sharing once the templates \(\mathcal {T}\) are obtained, we need to be able to match the abstraction \(S = N_{1:k}(\mathcal {I}({\boldsymbol{x}}))\) of a region to be verified via proof transfer to a template in \(\mathcal {T}\):
Problem 3 (Template Matching)
Given a set of templates \(\mathcal {T}\) at layer k of a neural network N, and a new input region \(\mathcal {I}({\boldsymbol{x}})\), determine whether there exists a \(T \in \mathcal {T}\) such that \(S \subseteq T\), where \(S = N_{1:k}(\mathcal {I}({\boldsymbol{x}}))\).
Together, Problems 2 and 3 outline a general framework for proof sharing, permitting many instantiations. We note that Problems 2 and 3 present an inherent precision vs. speed trade-off: Problem 3 can be solved most efficiently for small values of \(m = |\mathcal {T}|\) and simpler representations of T (allowing faster checking of \(S \subseteq T\)) at the cost of lower proof matching rates. Alternatively, Eq. (1) can be maximized by large m and T represented by complex abstractions, thus attaining high precision but expensive template generation and matching.
Beyond Proof Sharing on the Same Input. In this section, we focused on proof sharing for different specifications of the same input \({\boldsymbol{x}}\). However, we observed that proof sharing is even possible between specifications defined on different inputs \({\boldsymbol{x}}\) and \({\boldsymbol{x}}'\). To facilitate the use of templates in this setting, Eq. (1) in Problem 2 can be adapted to consider an input distribution.
4 Efficient Verification via Proof Sharing
We now consider an instantiation of proof sharing where we are given an input \({\boldsymbol{x}}\) and properties \(\mathcal {I}_{1}, \dots ,\mathcal {I}_{r}\) to verify. Our general approach, based on Problems 2 and 3, is shown in Algorithm 1. In this section, we first discuss Algorithm 1 in general. We then describe the possible choices of abstract domains and their implications on the algorithm, followed by a discussion on template generation for two different specific problems. Finally, we conclude the section with a discussion on the conditions for effective proof sharing verification.
In Algorithm 1, we first create the set of templates \(\mathcal {T}\) (Line 1, discussed shortly) and subsequently verify \(\mathcal {I}_{1}, \dots ,\mathcal {I}_{r}\) using \(\mathcal {T}\). Here, we consider two, potentially identical, verifiers \(V_T\) and \(V_S\), where \(V_T\) is used to create the templates \(\mathcal {T}\) and \(V_S\) is used to propagate input regions up to the template layer k. For each \(\mathcal {I}_{i}\) we propagate it up to layer k (Line 4) to obtain \(S = N_{1:k}(\mathcal {I}_{i}({\boldsymbol{x}}))\) and check if we can match it to a template \(T_j \in \mathcal {T}\) (Line 6) using an inclusion check. If a match is found, then we conclude that \(N(\mathcal {I}_{i}({\boldsymbol{x}})) \models \psi \) and set the verification output \(v_i\) to True. If this is not the case (Line 11) we verify \(N(\mathcal {I}_{i}({\boldsymbol{x}})) \models \psi \) directly by checking \(V_S(S, N_{k+1:L}) \models \psi \). If the template generation fails, we revert to verifying \(\mathcal {I}_i\) by applying \(V_S\) in the usual way (omitted in Algorithm 1).
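The control flow described above can be sketched as follows. This is our own schematic rendering of Algorithm 1 (it omits the fallback for failed template generation); the callables stand in for \(V_T\), \(V_S\), and the postcondition check, and all names are illustrative:

```python
import numpy as np

def verify_with_templates(regions, propagate_to_k, propagate_rest, check_psi,
                          gen_templates, included):
    """Verify each region, shortcutting via templates where possible."""
    templates = gen_templates()                        # Line 1: build T with V_T
    results = []
    for region in regions:
        S = propagate_to_k(region)                     # Line 4: S = N_{1:k}(I_i(x))
        if any(included(S, T) for T in templates):     # Lines 5-10: template match
            results.append(True)                       # proof shared: psi holds
        else:                                          # Lines 11-12: fall back
            results.append(check_psi(propagate_rest(S)))
    return results
```

For instance, with boxes as `(lo, hi)` pairs, `included` is the 2d-comparison box inclusion check, and matched regions skip the propagation through \(N_{k+1:L}\).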
Soundness. As long as the templates T are sound, this procedure is sound, i.e., Algorithm 1 only returns \(v_i = \text {True}\) if \(\forall {\boldsymbol{z}}\in \mathcal {I}_{i}({\boldsymbol{x}}). \; N({\boldsymbol{z}}) \models \psi \) holds. Formally:
Theorem 1
Algorithm 1 is sound if \(\forall \; T \in \mathcal {T}\!,\, z \in T.\; N_{k+1:L}(z) \models \psi \) and \(V_S\) is sound.
This holds by the construction of the algorithm:
Proof
For a given \({\boldsymbol{x}}\) and \(\mathcal {I}_i\), Algorithm 1 only claims \(v_i = \text {True}\) if either the check in (i) Line 6 or (ii) Line 11 succeeds. Since \(V_S\) is sound, we know that \(\forall {\boldsymbol{z}}\in \mathcal {I}_{i}({\boldsymbol{x}}). \; N_{1:k}({\boldsymbol{z}}) \in S\). Therefore, in case (i), by our requirement on T as well as \(S \subseteq T\), it follows that \(\forall {\boldsymbol{z}}\in \mathcal {I}_{i}({\boldsymbol{x}}). \; N({\boldsymbol{z}}) \models \psi \). In case (ii), we execute Line 12 and the same property holds due to the soundness of \(V_S\).
Importantly, Theorem 1 shows that the generation process of \(\mathcal {T}\) does not affect the overall soundness as long as the set of templates \(\mathcal {T}\) fulfills the condition in Theorem 1. In particular, that means that when solving Problem 2, it suffices to show the side condition \((\forall \; T \in \mathcal {T}. \, N_{k+1:L}(T) \models \psi )\) holds, while heuristically approximating the actual optimization criteria. We let \(V_T\) denote the verifier used to ensure this property in gen_templates.
Precision. We say a verifier \(V_1\) is more precise than another verifier \(V_2\) on N if, out of a set of specifications, it can verify some that \(V_2\) cannot.
Theorem 2
If \(V_S(V_S(\mathcal {I}_i({\boldsymbol{x}}), N_{1:k}), N_{k+1:L}) = V_S(\mathcal {I}_i({\boldsymbol{x}}), N)\), then Algorithm 1 is at least as precise as \(V_S\).
Proof
Even if the inclusion check in Line 6 fails, Line 12 outputs \(v_i = V_S(V_S(\mathcal {I}_i({\boldsymbol{x}}), N_{1:k}), N_{k+1:L}) \models \psi \), which by our requirement equals \(v_i = V_S(\mathcal {I}_i({\boldsymbol{x}}), N) \models \psi \). Therefore, Algorithm 1 has at least the precision of \(V_S\).
The required property holds for any verifier \(V_S\) in which the abstraction at each layer depends only on the abstractions from previous layers; this is fulfilled by all verifiers considered in this paper. For verifiers \(V_S\) that do not fulfill the required property, potential losses in precision can be remedied (at the cost of runtime) by using \(V_S(\mathcal {I}_i({\boldsymbol{x}}), N_{1:L})\) in Line 12. Interestingly, it is even possible to increase the precision of Algorithm 1 over \(V_S\) by creating templates T that are verified with a more precise verifier \(V_T\). However, in this discussion, we restrict ourselves to speed gains. We believe that obtaining precision gains requires instantiating our framework with a significantly different approach than that taken for improving speed, which is the main focus of our work. We leave this as an interesting item for future work.
Run-Time. Here, we aim to characterize the runtime of Algorithm 1 as well as its speedup over conventional verification. For an input \({\boldsymbol{x}}\) (keeping the other parameters fixed), the expected run time is
\(t_{PS} = t_{\mathcal {T}} + r\,(t_{S} + t_{\subseteq }) + r\,(1-\rho )\, t_{\psi },\)   (2)
where \(t_{\mathcal {T}}\) is the expected time required to generate the templates at Line 1, r is the number of specifications to be verified, \(t_{S}\) is the expected time to compute S (Line 4), \(t_{\subseteq }\) is the time to check \(S \subseteq T\) for \(T \in \mathcal {T}\) until a match is found (Line 5 to Line 10), \(\rho \in [0, 1]\) is the rate of specifications where a template is found and \(t_{\psi }\) is the time required to check \(\psi \) on the network output corresponding to S (Line 12). This time is minimized if the individual expected run times \(t_{\mathcal {T}}, t_S, t_{\psi }\) are minimal and \(\rho \) is large (i.e., close to 1). Unfortunately, computing the template match rate \(\rho \) analytically is challenging and requires global reasoning over the neural network for all valid inputs, which are not clearly defined. However, our empirical analysis (in Sect. 5) shows that \(\rho \) is higher when templates are created at later layers (as in Sect. 3.1).
To determine the speedup compared to a baseline standard verifier, we make the simplifying assumption that there is a single verifier \(V = V_S = V_T\) with expected runtime \(\nu \) per layer. Thus, the expected runtime of the conventional verifier is \(t_{BL} = rL\nu \). We have \(t_{\mathcal {T}} = \lambda mL\nu \), \(t_S = k\nu \), \(t_{\psi } = (L - k)\nu \), \(t_{\subseteq } = \eta m\), and ultimately \(t_{PS} = (\lambda m+r(1-\rho ))L\nu + r \rho k\nu + r \eta m\) for constants \(\lambda \in \mathbb {R}_{>0}\), which indicates the overhead of generating one template over just verifying it, and \(\eta \in \mathbb {R}_{>0}\), which denotes the time required to perform an inclusion check for one template. As this phrasing shows, Algorithm 1 has the same asymptotic runtime as the base verifier V. Further, this formulation allows us to write our expected speedup as \(\tfrac{t_{BL}}{t_{PS}} = \tfrac{r}{\lambda m + \eta r m/(L\nu ) + r \rho k/L + r(1-\rho )}\). This speedup is maximized when k is small compared to L, i.e., templates are placed early in the neural network, the matching rate \(\rho \) is close to 1, and \(m, \lambda , \eta \) are small, i.e., generation and matching are fast. Unfortunately, these requirements are at odds with each other: as we show in Sect. 5, higher m leads to a higher matching rate \(\rho \), and \(\rho \) is naturally higher for templates later in the neural network (higher k). Thus, high speedups require careful hyperparameter choices.
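The cost model above is easy to explore numerically. The sketch below (our own helper; the constants \(\lambda \) and \(\eta \) are verifier-dependent and the values in the usage are illustrative, not measured) computes the expected speedup \(t_{BL}/t_{PS}\):

```python
def expected_speedup(r, L, k, m, rho, lam, eta, nu=1.0):
    """Expected speedup t_BL / t_PS from the run-time model in the text."""
    t_bl = r * L * nu                                   # baseline: r full passes
    t_ps = (lam * m + r * (1 - rho)) * L * nu \
           + r * rho * k * nu + r * eta * m             # proof-sharing cost
    return t_bl / t_ps
```

As expected, for fixed k, m the speedup grows with the matching rate \(\rho \): matched regions only pay for the first k layers plus the inclusion check.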
To showcase how we can achieve good templates as well as fast matching, we next discuss the choice of the abstract domain to be used in the propagation and the representation of the templates. Then we discuss the template generation procedure and instantiate it for the verification of robustness to adversarial patches and geometric perturbations.
4.1 Choice of Abstract Domain
To solve Problems 2 and 3 in a way that minimizes the expected runtime and maximizes the overall precision, the choice of abstract domain is crucial. Here we briefly review common choices of abstract domains for neural network verification and how they are suited to our problem. Geometrically these domains can be thought of as a convex abstraction of the set of vectors representing reachable values at each layer of the neural network. We say that an abstraction \(a_1\) is more precise than another abstraction \(a_2\), if and only if \(a_1 \subseteq a_2\), i.e., all points in \(a_1\) occur in \(a_2\). Similarly, we say that a domain is more precise than another if it can express all abstractions in the other domain.
The Box (or Interval) domain [14, 16, 24] abstracts sets in d dimensions as \(B = \{{\boldsymbol{a}}+ \mathrm {diag}({\boldsymbol{d}}) {\boldsymbol{e}}\mid {\boldsymbol{e}}\in [-1, 1]^{d} \}\) with center \({\boldsymbol{a}}\in \mathbb {R}^{d}\) and width \({\boldsymbol{d}}\in \mathbb {R}_{\ge 0}^{d}\). The Zonotope domain [14, 15, 24, 31, 40] uses relaxations Z of the form
\(Z = \{{\boldsymbol{a}}+ {\boldsymbol{A}}{\boldsymbol{e}}\mid {\boldsymbol{e}}\in [-1, 1]^{q} \},\)   (3)
parametrized with \({\boldsymbol{a}}\in \mathbb {R}^{d}\) and \({\boldsymbol{A}}\in \mathbb {R}^{d \times q}\).
A third common choice are (restricted) convex Polyhedra P [12, 32, 42]. Here, we consider P to be in the DeepPoly (DP) domain [32, 42]. Generally, Boxes are less precise, i.e. certify fewer properties, than Zonotopes or Polyhedra.
For efficient proof sharing, we require a fast inclusion check \(S \subseteq T\), which is challenging in our context due to the high dimensionality d of the intermediate neural network layers. While we point the interested reader to [29] for a detailed discussion, we summarize the key results in Table 2. There, ✓ denotes feasibility, i.e., low polynomial runtime (usually 2d comparisons, sometimes with an additional matrix multiplication), and ✗ denotes infeasibility, e.g., exponential run time. If T is a Box, all checks are simple as it suffices to compute the outer bounding box of S and compare the 2d constraints. If T is a DP Polyhedron, these checks require a linear program (LP) to be solved. While the size of this LP permits a low theoretical time complexity in case S is a Box or DP Polyhedron, in practice we consider calling an LP solver too expensive (denoted as (✓)). For Zonotopes these checks are generally infeasible, as they require an enumeration of the faces or corners, which is computationally expensive for large d and q. While Zonotopes can be encoded as Polyhedra (but not necessarily DP Polyhedra) and the same LP inclusion check as for P could be used, the resulting LP would require exponentially many variables due to the previously mentioned enumeration. However, by placing constraints on the matrix \({\boldsymbol{A}}\) in Eq. (3), these inclusion checks can be performed efficiently. The mapping of a Zonotope to such a restricted Zonotope is called order reduction via outer-approximation [19, 29].
In particular, for a Zonotope Z we consider the order reduction \(\alpha _{\text {Box}}\) to its outer bounding box (where \({\boldsymbol{A}}\) is diagonal) and note that other choices of \(\alpha \) are possible (e.g. the reduction to affine transformations of a hyperbox).
For a general Zonotope Z, its outer bounding box \(Z' = \alpha _{\text {Box}}(Z)\) can be easily obtained. The center of \(Z'\) is \({\boldsymbol{a}}\), the center of Z. The width \({\boldsymbol{d}}\in \mathbb {R}_{\ge 0}^{d}\) is given as \(d_{i} = \sum _{j=1}^{q} |A_{i,j}|\). \(Z'\) can be represented as either a Box or a Zonotope (with \({\boldsymbol{A}}= \mathrm {diag}({\boldsymbol{d}})\)). To check \(S \subseteq Z'\) for a general Zonotope S, it suffices to check \(\alpha _{\text {Box}}(S) \subseteq Z'\), which reduces to the simple inclusion check for boxes.
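The order reduction \(\alpha _{\text {Box}}\) and the resulting box inclusion check can be sketched as follows (illustrative helpers under our own names, with Z given by its center \({\boldsymbol{a}}\) and generator matrix \({\boldsymbol{A}}\)):

```python
import numpy as np

def alpha_box(a, A):
    """Outer bounding box of the zonotope Z = {a + A e : e in [-1, 1]^q}.
    The center stays a; the width is d_i = sum_j |A_ij|."""
    return a, np.abs(A).sum(axis=1)

def box_in_box(a_s, d_s, a_t, d_t):
    """Check box (a_s, d_s) subset of box (a_t, d_t) via 2d comparisons."""
    return bool(np.all(a_t - d_t <= a_s - d_s) and np.all(a_s + d_s <= a_t + d_t))
```

Note the check is sound but one-sided: \(S \subseteq \alpha _{\text {Box}}(S)\), so box inclusion implies \(S \subseteq Z'\), while a failed check does not disprove it.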
Based on the above discussion, we will use the Zonotope domain to represent all abstractions, and use verifiers \(V_S = V_T\) that propagate these zonotopes using the state-of-the-art DeepZ transformers [31]. To permit efficient inclusion checks, we apply \(\alpha _\text {Box}\) on the resulting zonotopes to obtain the Box templates T, which we treat as a special case of Zonotopes.
4.2 Template Generation
We now discuss instantiations for gen_templates in Algorithm 1. Recall from Sect. 3.1 the idea of proof subsumption, i.e., that abstractions for some specifications contain abstractions for other specifications. Building on this, we relax Problem 2 in order to create m templates \(T_{j}\) from intermediate abstractions \(N_{1:k}(\hat{\mathcal {I}}_{i}({\boldsymbol{x}}))\) for some \(\hat{\mathcal {I}}_{1}, \dots , \hat{\mathcal {I}}_{m}\). Note that \(\hat{\mathcal {I}}_{j}\) are not necessarily directly related to the specifications \(\mathcal {I}_{1}, \dots ,\mathcal {I}_{r}\) that we want to verify. For a chosen layer k, input \({\boldsymbol{x}}\), number of templates m, and verifiers \(V_S\) and \(V_T\), we optimize
\(\max _{\hat{\mathcal {I}}_{1}, \dots , \hat{\mathcal {I}}_{m}} \sum _{i=1}^{r} \mathbb {1}\big [ \textstyle \bigvee _{j=1}^{m} N_{1:k}(\mathcal {I}_i({\boldsymbol{x}})) \subseteq T_j \big ] \quad \text {s.t.} \quad T_j = N_{1:k}(\hat{\mathcal {I}}_{j}({\boldsymbol{x}})), \; \forall j.\; N_{k+1:L}(T_j) \models \psi .\)   (4)
As originally in Problem 2 (Eq. (1)), we aim to find a set of templates such that the intermediate shapes at layer k for most of the r specifications are covered by at least one template T. In contrast to Eq. (1), we tie \(T_j\) to the specifications \(\hat{\mathcal {I}}_j\). This alone does not make the problem easier to tackle. However, next we will discuss how to generate application-specific parametric \(\hat{\mathcal {I}}_{j}\) and solve Eq. (4) by optimizing over their parameters, allowing us to solve template generation much more efficiently than in Eq. (1).
4.3 Robustness to Adversarial Patches
We now instantiate the above scheme in order to verify the robustness of image classifiers against adversarial patches [10]. Consider an attacker that is allowed to arbitrarily change any \(p \times p\) patch of the image, as showcased earlier in Fig. 2. For such a patch over pixel positions \([i,i+p-1] \times [j,j+p-1]\), the corresponding perturbation is
\(\mathcal {I}_{p \times p}^{i,j}({\boldsymbol{x}}) := \{ {\boldsymbol{z}}\in [0,1]^{h \times w} \mid \pi _{i,j}^C({\boldsymbol{z}}) = \pi _{i,j}^C({\boldsymbol{x}}) \},\)   (5)
where h and w denote the height and width of the input \({\boldsymbol{x}}\). Here \(\pi _{i,j}\) denotes the parts of the image affected by the patch, and \(\pi _{i,j}^C\) its complement, i.e., the unaffected part of the image. To prove robustness for an arbitrarily placed \(p \times p\) patch, however, one must consider the perturbation set \(\mathcal {I}_{p \times p}({\boldsymbol{x}}) := \cup _{i, j} \mathcal {I}_{p \times p}^{i,j}({\boldsymbol{x}})\).
To prove robustness for \(\mathcal {I}_{p \times p}\), existing approaches [10] separately verify \(\mathcal {I}_{p \times p}^{i,j}({\boldsymbol{x}})\) for all \(i \in \{1, \dots , h-p+1\}, j \in \{1, \dots , w-p+1\}\). For example, with \(p=2\) and a \(28 \times 28\) MNIST image, this approach requires 729 individual proofs. Because the different proofs for \(\mathcal {I}_{p \times p}\) share similarities, this is an ideal candidate for proof sharing. We apply Algorithm 1 and check \(\wedge _{i} v_{i}\) at the end to speed up this process. For template generation, we solve Eq. (4) for m templates with one input perturbation \(\hat{\mathcal {I}}_{i}\) per template.
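To illustrate how the individual specifications arise, the following sketch (using NumPy; `patch_boxes` is our illustrative helper, not the paper's implementation) enumerates the per-position box regions: inside the patch, pixels may take any value in [0, 1]; outside, they stay fixed.

```python
import numpy as np

def patch_boxes(x, p):
    """Enumerate box input regions I_{pxp}^{i,j}(x): pixels inside the
    p x p patch at (i, j) may take any value in [0, 1]; all other
    pixels are fixed to their original value."""
    h, w = x.shape
    for i in range(h - p + 1):
        for j in range(w - p + 1):
            lo, hi = x.copy(), x.copy()
            lo[i:i + p, j:j + p] = 0.0
            hi[i:i + p, j:j + p] = 1.0
            yield (i, j), lo, hi

x = np.random.rand(28, 28)
regions = list(patch_boxes(x, 2))
print(len(regions))  # 27 * 27 = 729 individual specifications
```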
We empirically found (recall Table 1) that setting \(\hat{\mathcal {I}}_{i}\) to an \(\ell _{\infty }\) region \(\mathcal {I}_{{\epsilon }_i}\) works particularly well at capturing a majority of patch perturbations \(\mathcal {I}_{p \times p}^{i,j}\) at intermediate layers. Specifically, choosing \({\epsilon }_i\) as the maximal verifiable value for the given input works best.
To further increase the number of specifications contained in a set of templates \(\mathcal {T}\), we use m template perturbations of the form \(\hat{\mathcal {I}}_{i}({\boldsymbol{x}}) := \{{\boldsymbol{x}}' \mid \Vert {\boldsymbol{x}}'_{\mu _{i}} - {\boldsymbol{x}}_{\mu _{i}}\Vert _{\infty } \le {\epsilon }_i,\ {\boldsymbol{x}}'_{\mu _{i}^C} = {\boldsymbol{x}}_{\mu _{i}^C}\},\)
where \(\mu _{i}\) denotes a subset of the pixels of the input image and \(\mu _i^C\) its complement, and we maximize \({\epsilon }_i\) in a best-effort manner. In particular, we consider \(\mu _{1}, \dots , \mu _{m}\) such that they partition the set of pixels in the image (e.g., as in Fig. 5).
As noted earlier, this generation procedure must be fast, yet yield a set \(\mathcal {T}\) that many abstractions match, in order to obtain speedups. Thus, we consider small m and fixed patterns \(\mu _{1}, \dots , \mu _{m}\). For each \(\hat{\mathcal {I}}_{i}\), we aim to find the largest verifiable \(\epsilon _{i}\) in order to maximize the number of matches. Note that for \(m=1\), this is equivalent to the \(\ell _{\infty }\) input perturbation \(\mathcal {I}_{\epsilon }\) with the maximal verifiable \(\epsilon \) for the given image.
Concretely, we can perform a binary search over \(\epsilon _{i}\) to find a large \({\epsilon }_i\) still satisfying \(N_{k+1:L}(\alpha _{\text {Box}}(N_{1:k}(\hat{\mathcal {I}}_{i}))) \models \psi \). Verification with our chosen DeepZ Zonotopes is not monotonic in \({\epsilon }_i\) due to the non-monotonic transformers used for non-linearities (e.g., ReLU). This renders binary search a best-effort approximation. As we do not require a formal maximum but rather solve a surrogate for Problem 2, this still works well in practice. Further note that applying \(\alpha _{\text {Box}}\) to templates introduces imprecision, i.e., \(V_T\) might fail to prove properties over templates that it could prove without the application of \(\alpha _{\text {Box}}\). However, Theorem 2 (which only requires properties of \(V_S\)) still applies.
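The best-effort binary search can be sketched as follows (the `verify` callback stands in for running the end-to-end check \(N_{k+1:L}(\alpha _{\text {Box}}(N_{1:k}(\hat{\mathcal {I}}_{i}))) \models \psi \); the function name and interface are ours):

```python
def largest_verifiable_eps(verify, eps_max, steps=10):
    """Best-effort binary search for a large epsilon that `verify`
    accepts. Since verification with Zonotopes is not monotonic in
    epsilon, the result is a good candidate, not a formal maximum."""
    lo, hi, best = 0.0, eps_max, 0.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if verify(mid):      # verified: try a larger radius
            best, lo = mid, mid
        else:                # failed: shrink the radius
            hi = mid
    return best

# toy stand-in for the verifier: verifiable iff eps <= 0.3
eps = largest_verifiable_eps(lambda e: e <= 0.3, eps_max=1.0)
```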
Templates at Multiple Layers. We can extend this approach to obtain templates at multiple layers without a large increase in computational cost. With templates at multiple layers, we first try to match the propagated shape against the templates at the earliest template layer and, upon failure, propagate it further to the next template layer, where we again attempt a match. In Algorithm 1, this means repeating the block from Line 4 to Line 10 for each template layer before proceeding to the check on Line 11.
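A minimal sketch of this multi-layer matching loop, with placeholder callbacks for shape propagation, template matching, and the fallback full verification (all names and the toy "shapes as numbers" example are ours):

```python
def verify_with_templates(propagate, match, verify_rest, shape0, layers):
    """Multi-layer template matching: propagate the abstract shape layer
    by layer; at each template layer try to match; on success the proof
    is done, otherwise continue and fall back to full verification."""
    shape = shape0
    prev = 0
    for k, templates in layers:          # layers: [(k, [T, ...]), ...]
        shape = propagate(shape, prev, k)
        if any(match(shape, T) for T in templates):
            return True
        prev = k
    return verify_rest(shape, prev)

# toy example: "shapes" are numbers, propagation adds the layer distance
propagate = lambda s, a, b: s + (b - a)
match = lambda s, T: s >= T
ok = verify_with_templates(propagate, match, lambda s, k: False,
                           1, [(2, [5]), (3, [2])])
```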
The full template generation procedure is given in Algorithm 2. First, we perform a binary search over \({\epsilon }_i\) (Line 6) to find the largest \({\epsilon }_i\) for which the specification is verifiable. Then, for each layer k in the set K of layers at which we create templates, we create a box \(T_k\) from the Zonotope. As this \(T_k\) may not be verifiable, due to the imprecision added by \(\alpha _{\text {Box}}\), we perform another binary search for the largest scaling factor \(\beta _k\) (Line 10), which is applied to the matrix \({\boldsymbol{A}}\) in Eq. (3). We denote this operation as \(\beta _k T_k\). We show an example for a single layer k in Fig. 6. The blue area outlines the Zonotope found via Line 6, which is verifiable as it lies fully on one side of the decision boundary (red, dashed). After applying \(\alpha _{\text {Box}}\) (orange), however, it is not (it crosses the decision boundary). After scaling with \(\beta _k\), the shape is verifiable again (green) and is used as a template.
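The box abstraction of a Zonotope \(z = {\boldsymbol{a}} + {\boldsymbol{A}}{\boldsymbol{e}}\) with \({\boldsymbol{e}} \in [-1,1]^q\), and the \(\beta _k\) scaling of its generator matrix, can be sketched as follows (function names are ours):

```python
import numpy as np

def alpha_box(a, A):
    """Interval (box) abstraction of a Zonotope z = a + A e, e in [-1,1]^q:
    the per-dimension radius is the sum of absolute generator entries."""
    d = np.abs(A).sum(axis=1)
    return a - d, a + d

def scale_template(a, A, beta):
    """Shrink the box template by scaling the generator matrix A (the
    beta_k * T_k operation): a smaller box is more likely verifiable."""
    return alpha_box(a, beta * A)

a = np.array([1.0, -0.5])
A = np.array([[0.5, 0.25], [0.0, 1.0]])
lo, hi = alpha_box(a, A)             # box [0.25, 1.75] x [-1.5, 0.5]
lo2, hi2 = scale_template(a, A, 0.5)
```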
4.4 Geometric Robustness
Geometric robustness verification [3, 13, 28, 32] aims to verify the robustness of neural networks against geometric transformations such as image rotations or translations, which typically include an interpolation operation. For example, consider the rotation \(R_{\gamma }\) of an image by \(\gamma \in \varGamma \) degrees for an interval \(\varGamma \) (e.g., \(\gamma \in [-5, 5]\)), for which we consider the specification \(\mathcal {I}_{\varGamma }({\boldsymbol{x}}) := \{R_{\gamma }({\boldsymbol{x}}) \mid \gamma \in \varGamma \}\). We note that, unlike \(\ell _\infty \) and patch verification, the input regions for geometric transformations are non-linear and have no closed-form solution, so an over-approximation of the input region must be computed [3]. For large \(\varGamma \), this approximate input region \(\mathcal {I}_{\varGamma }({\boldsymbol{x}})\) can be too coarse, resulting in imprecise verification. Hence, to assert \(\psi \) on \(\mathcal {I}_{\varGamma }\), existing state-of-the-art approaches [3] split \(\varGamma \) into r smaller ranges \(\varGamma _{1}, \dots , \varGamma _{r}\) and verify the resulting r specifications \((\mathcal {I}_{\varGamma _{i}}, \psi )\) for \(i \in 1, \dots , r\). These smaller perturbations share similarities, facilitating proof sharing. We instantiate our approach similarly to Sect. 4.3. A key difference is that while for patches \({\boldsymbol{x}}\in \mathcal {I}_{p \times p}^{i,j}({\boldsymbol{x}})\) for all i, j, here in general \({\boldsymbol{x}}\not \in \mathcal {I}_{\varGamma _{i}}({\boldsymbol{x}})\) for most i; the individual perturbations \(\mathcal {I}_{i}({\boldsymbol{x}})\) thus do not overlap. To account for this, we consider m templates and split \(\varGamma \) into m equally sized chunks (unrelated to the r splits), obtaining the angles \(\gamma _{1}, \dots , \gamma _{m}\) at the center of each chunk.
For m templates we then consider the perturbations \(\hat{\mathcal {I}}_{i} := \mathcal {I}_{\epsilon _{i}}(R_{\gamma _{i}}({\boldsymbol{x}}))\), denoting the \(\ell _{\infty }\) perturbation of size \({\epsilon }_{i}\) around \({\boldsymbol{x}}\) rotated by \(\gamma _{i}\) degrees. To find the templates we employ a procedure analogous to Algorithm 2.
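Computing the chunk centers \(\gamma _{1}, \dots , \gamma _{m}\) is straightforward; a sketch (function name ours):

```python
def chunk_centers(gamma_lo, gamma_hi, m):
    """Split [gamma_lo, gamma_hi] into m equally sized chunks and return
    the center angle of each chunk (used as template rotation gamma_i)."""
    width = (gamma_hi - gamma_lo) / m
    return [gamma_lo + (i + 0.5) * width for i in range(m)]

print(chunk_centers(-40.0, 40.0, 4))  # [-30.0, -10.0, 10.0, 30.0]
```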
4.5 Requirements for Proof Sharing
Now, we discuss the requirements on the neural network N such that proof sharing via templates works well. For simplicity, we discuss simple perdimension box bounds propagation for \(V_S\) and \(V_T\). However, similar arguments can be made for more complex relational abstractions such as Zonotopes or Polyhedra.
For an abstraction S to match a template T, we need to show interval inclusion in each dimension. For a particular dimension i this can occur in two ways: (i) both S and T are a point in that dimension and these points coincide, i.e., \(a^S_i = a^T_i\), or (ii) \(a^S_i \pm d^S_i \subseteq a^T_i \pm d^T_i\). While, particularly in ReLU networks, the first case can occur after a ReLU layer sets values to zero, we focus our analysis on the second case as it is more common. Here, the width \(d^T_i\) of T in that dimension must be sufficient to cover S. Ignoring case (i) and letting \(\mathrm {supp}(T)\) denote the dimensions in which \(d_i^T > 0\), we obtain \(\mathrm {supp}(S) \subseteq \mathrm {supp}(T)\) as a necessary condition for inclusion. While it is in general hard to argue about the magnitudes of these values, this condition provides intuition. When starting from input specifications with \(\mathrm {supp}(\mathcal {I}) \not \subseteq \mathrm {supp}(\hat{\mathcal {I}})\), the condition \(\mathrm {supp}(S) \subseteq \mathrm {supp}(T)\) can only hold if, during propagation through the neural network \(N_{1:k}\), the mass in \(\mathrm {supp}(\hat{\mathcal {I}})\) can “spread out” sufficiently to cover \(\mathrm {supp}(S)\). In the fully connected neural networks discussed here, the matrices of linear layers provide this possibility. However, in networks that only read part of the input at a time, such as recurrent neural networks, or convolutional neural networks in which only locally neighboring inputs feed into the respective output of the next layer, these connections do not necessarily exist. This makes proof sharing hard until later layers that regionally or globally pool information. As this increases the depth of the layer k at which proof transfer can be applied, it also decreases the potential speedup of proof transfer.
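The necessary condition \(\mathrm {supp}(S) \subseteq \mathrm {supp}(T)\) yields a cheap rejection test on the per-dimension widths; a sketch (function name ours; necessary but not sufficient):

```python
import numpy as np

def may_be_included(dS, dT, tol=0.0):
    """Necessary condition for box inclusion S in T: every dimension in
    which S has positive width (supp(S)) must also have positive width
    in T, i.e. supp(S) is a subset of supp(T). Cheap rejection test,
    not a sufficient check of inclusion."""
    return bool(np.all((dS <= tol) | (dT > tol)))

dS = np.array([0.0, 0.1, 0.2])
dT = np.array([0.5, 0.3, 0.0])
print(may_be_included(dS, dT))  # False: dim 2 has width in S but not in T
```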
This could be alleviated by different ways of creating templates, which we plan to investigate in the future.
5 Experimental Evaluation
We now experimentally evaluate the effectiveness of our algorithms from Sect. 4.
5.1 Experimental Setup
We consider the verification of robustness to adversarial patch attacks and geometric transformations in Sect. 5.2 and Sect. 5.3, respectively. We define specifications over the first 100 test set images each from the MNIST [22] and CIFAR-10 [20] (“CIFAR”) datasets, as with repetitions and parameter variations the overall runtime becomes high. We use DeepZ [31] as the baseline verifier as well as for \(V_S\) and \(V_T\). Throughout this section, we evaluate proof sharing on two networks per dataset: a seven-layer neural network with 200 neurons per layer (“7\(\,\times \,\)200”) and a nine-layer network with 500 neurons per layer (“9\(\,\times \,\)500”), both using ReLU activations. These architectures are similar to the fully connected ones used in the ERAN and mnistfc VNN-Comp categories [2].
For MNIST, we train for 100 epochs, enumerating all patch locations for each sample; for CIFAR, we train for 600 epochs with 10 random patch locations, as outlined in [10], using interval training [16, 24]. On MNIST, the 7\(\,\times \,\)200 and 9\(\,\times \,\)500 networks achieve a natural accuracy of 98.3% and 95.3%, respectively; for CIFAR, these values are 48.8% and 48.1%. Our implementation uses PyTorch [25] and is evaluated on Ubuntu 18.04 with an Intel Core i9-9900K CPU and 64 GB RAM. For all timing results, we report the mean over three runs.
5.2 Robustness Against Adversarial Patches
For MNIST, containing \(28 \times 28\) images, as outlined in Sect. 4.3, verifying robustness against \(2 \times 2\) patch perturbations requires verifying 729 individual perturbations. Only if all are verified is the overall property established for a given image. Similarly, for CIFAR, containing \(32 \times 32\) color images, there are 961 individual perturbations (the patch is applied across all color channels).
We now investigate the two main parameters of Algorithm 2: the masks \(\mu _{1}, \dots , \mu _{m}\) and the layers \(k \in K\). We first study the impact of the layer k used for creating the template. To this end, we consider the 7\(\,\times \,\)200 networks and use \(m=1\) (covering the whole image; equivalent to \(\hat{\mathcal {I}}_{\epsilon }\)). Table 3 shows the corresponding template matching rates and the overall percentage of individual patches that can be verified (“patches verif.”). (The overall percentage of images for which \(\mathcal {I}_{2\times 2}\) holds is reported as “verif.” in Table 6.) Table 4 shows the corresponding verification times (including template generation). We observe that many template matches can already be made at the second or third layer. As creating templates simultaneously at the second and third layer works well for both datasets, we utilize templates at these layers in further experiments.
Next, we investigate the impact of the pixel masks \(\mu _{1}, \dots , \mu _{m}\). To this end, we consider three different settings, as showcased earlier in Fig. 5: (i) the full image (\(\ell _{\infty }\) mask as before; \(m=1\)), (ii) “center + border” (\(m=2\)), where we consider the \(6 \times 6\) center pixels as one group and all others as another, and (iii) the \(2 \times 2\) grid (\(m = 4\)), where we split the image into equally sized quarters.
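The three mask settings can be sketched as follows (a simple construction consistent with the description above; the function name and exact center placement are illustrative, not the paper's code):

```python
import numpy as np

def mask_patterns(h, w, setting):
    """Build pixel masks mu_1..mu_m for the three settings: full image
    (m=1), center+border (m=2), and 2x2 grid (m=4). The 6x6 center
    block matches the described MNIST setup."""
    if setting == "full":
        return [np.ones((h, w), dtype=bool)]
    if setting == "center+border":
        center = np.zeros((h, w), dtype=bool)
        ci, cj = h // 2, w // 2
        center[ci - 3:ci + 3, cj - 3:cj + 3] = True   # 6x6 center block
        return [center, ~center]
    if setting == "grid":
        masks = []
        for bi in (slice(0, h // 2), slice(h // 2, h)):
            for bj in (slice(0, w // 2), slice(w // 2, w)):
                mu = np.zeros((h, w), dtype=bool)
                mu[bi, bj] = True
                masks.append(mu)
        return masks

masks = mask_patterns(28, 28, "grid")
assert sum(m.sum() for m in masks) == 28 * 28  # masks partition the image
```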
As we can see in Table 5, higher m allows more patches to be matched to the templates, indicating that our optimization procedure is a good approximation to Problem 2, which only considers the number of templates matched. Yet, for \(m>1\) the increase in matching rate p does not offset the additional time for template generation and matching; thus, \(m=1\) yields a better trade-off. This highlights the trade-offs discussed throughout Sect. 3 and Sect. 4. Based on this investigation, we now evaluate all networks and datasets in Table 6 using \(m=1\) and template generation at layers 2 and 3. In all cases, we obtain a speedup between 1.2\(\times \) and 2\(\times \) over the baseline verifier. Going from \(2 \times 2\) to \(3 \times 3\) patches, speedups remain around 1.6\(\times \) and 1.3\(\times \) for the two datasets, respectively.
Comparison with Theoretically Achievable Speed-Up. Finally, we want to determine the maximal possible speedup with proof sharing and see how much of this potential our method realizes. To this end, we investigate the same setting and network as in Table 3. We let \(t^{BL}\) and \(t^{PS}\) denote the runtime of the base verifier without and with proof sharing, respectively. Similar to the discussion in Sect. 4, we break \(t^{PS}\) down into \(t_{\mathcal {T}}\) (template generation time), \(t_S\) (time to propagate one input to layer k), \(t_{\subseteq }\) (time to perform template matching), and \(t_\psi \) (time to verify S if no match occurs). Table 7 shows different ratios of these quantities. For all, we assume a perfect matching rate at layer k and calculate the achievable speedup for patch verification on MNIST. Comparing the optimal and realized results, we see that at layers 3 and 4 our template generation algorithm, despite only approximately solving Problem 2, achieves near-optimal speedup. Removing the time for template matching and template generation shows that, at deeper layers, speeding up \(t_{\subseteq }\) and \(t_{\mathcal {T}}\) yields only diminishing returns.
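Under one plausible reading of this cost breakdown (the exact model is our assumption; the paper reports measured ratios), the achievable speedup for r specifications with matching rate p can be estimated as:

```python
def speedup(r, p, t_T, t_S, t_sub, t_psi, t_full):
    """Simple cost model (our reading of the breakdown above): with
    proof sharing, every specification is propagated to layer k (t_S)
    and matched (t_sub); a fraction (1 - p) fails to match and incurs
    the remaining verification cost (t_psi). The baseline verifies all
    r specifications from scratch at cost t_full each."""
    t_bl = r * t_full
    t_ps = t_T + r * (t_S + t_sub) + (1 - p) * r * t_psi
    return t_bl / t_ps

# perfect matching (p = 1): speedup approaches t_full / (t_S + t_sub)
s = speedup(r=729, p=1.0, t_T=0.5, t_S=0.2, t_sub=0.05, t_psi=0.8, t_full=1.0)
```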
5.3 Robustness Against Geometric Perturbations
For the verification of geometric perturbations, we take 100 images from the MNIST dataset and the 7\(\,\times \,\)200 neural network from Sect. 5.2. In Table 8, we consider an input region with ±2° rotation, ±10% contrast, and ±1% brightness change, inspired by [3]. To verify this region, similar to existing approaches [3], we split the rotation into r regions, each yielding a Box specification over the input. Here we use \(m=1\), a single template, with the largest verifiable \(\epsilon \) found via binary search. We observe that as we increase r, the verification rate increases, and so do the speedups. Proof sharing enables significant speedups between 1.6\(\times \) and 2.9\(\times \).
Finally, we investigate the impact of the number of templates m. To this end, we consider a setting with a large parameter space: an input region generated from a ±40° rotation with \(r=200\). In Table 9, we evaluate m templates obtained from the \(\ell _{\infty }\) input perturbation around m equally spaced rotations, applying binary search to find an \(\epsilon _i\) tailored to each template. Again we observe that \(m>1\) allows more template matches. However, in this setting the relative increase is much larger than for patches, making \(m=3\) faster than \(m=1\).
5.4 Discussion
We have shown that proof sharing can achieve speedups over conventional execution. While the speedup analysis (see Sect. 4 and Table 7) puts a ceiling on what is achievable in particular settings, we are optimistic that proof sharing can become an important tool for neural network robustness analysis. In particular, as the size of certifiable neural networks continues to grow, so does the potential for gains via proof sharing. Further, the idea of proof effort reuse can enable efficient verification of larger disjunctive specifications such as the patch and geometric examples considered here. Beyond the immediate speedups, the concept of proof sharing is interesting in its own right and can provide insights into the learning mechanisms of neural networks.
6 Related Work
Here, we briefly discuss conceptually related work:
Incremental Model Checking. The field of model checking aims to show that a formalized model, e.g., of software or hardware, adheres to a specification. As neural network verification can also be cast as model checking, we review incremental model checking techniques, which use an idea similar to proof sharing: reusing partial previous computations when checking new models or specifications. Proof sharing has been applied for discovering and reusing lemmas when proving theorems for satisfiability [6], Linear Temporal Logic [7], and the modal \(\mu \)-calculus [33]. Similarly, caching solvers [35] for Satisfiability Modulo Theories cache obtained results or even the full models used to obtain the solution, with assignments for all variables, allowing for faster verification of subsequent queries. Program analysis tasks that deal with repeated, similar inputs (e.g., individual commits in a software project) can leverage partial results [41], constraints [36], and precision information [4, 5] from previous runs.
Proof Sharing Between Networks. In neural network verification, some approaches abstract the network to achieve speedups in verification. These simplifications are constructed in a way that the proof can be adapted for the original neural network [1, 43]. Similarly, another family of approaches analyzes the difference between two closely related neural networks by utilizing their structural similarity [26, 27]. Such approaches can be used to reuse analysis results between neural network modifications, e.g. finetuning [9, 37].
In contrast to these works, we do not modify the neural network, but rather achieve speedups by only considering the relaxations obtained in the proofs. [37] additionally considers small changes to the input; however, these are much smaller than the differences in specification we consider here.
7 Conclusion
We introduced the novel concept of proof sharing in the context of neural network verification. We showed how to instantiate this concept, achieving speedups of up to 2 to 3\(\times \) for patch verification and geometric verification. We believe that the ideas introduced in this work can serve as a solid foundation for exploring methods that effectively share proofs in neural network verification.
References
Ashok, P., Hashemi, V., Křetínský, J., Mohr, S.: DeepAbstract: neural network abstraction for accelerating verification. In: Hung, D.V., Sokolsky, O. (eds.) ATVA 2020. LNCS, vol. 12302, pp. 92–107. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59152-6_5
Bak, S., Liu, C., Johnson, T.T.: The second international verification of neural networks competition. arXiv preprint abs/2109.00498 (2021)
Balunovic, M., Baader, M., Singh, G., Gehr, T., Vechev, M.T.: Certifying geometric robustness of neural networks. In: Neural Information Processing Systems (NIPS) (2019)
Beyer, D., Löwe, S., Novikov, E., Stahlbauer, A., Wendler, P.: Precision reuse for efficient regression verification. In: Symposium on the Foundations of Software Engineering (SIGSOFT) (2013)
Beyer, D., Wendler, P.: Reuse of verification results - conditional model checking, precision reuse, and verification witnesses. In: Bartocci, E., Ramakrishnan, C.R. (eds.) SPIN 2013. LNCS, vol. 7976, pp. 1–17. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39176-7_1
Bradley, A.R.: SAT-based model checking without unrolling. In: Jhala, R., Schmidt, D. (eds.) VMCAI 2011. LNCS, vol. 6538, pp. 70–87. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-18275-4_7
Bradley, A.R., Somenzi, F., Hassan, Z., Zhang, Y.: An incremental approach to model checking progress properties. In: International Conference on Formal Methods in Computer-Aided Design (FMCAD) (2011)
Brown, T.B., Mané, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial patch. arXiv preprint abs/1712.09665 (2017)
Cheng, C., Yan, R.: Continuous safety verification of neural networks. In: Design, Automation and Test in Europe Conference and Exhibition (2021)
Chiang, P., Ni, R., Abdelkader, A., Zhu, C., Studer, C., Goldstein, T.: Certified defenses for adversarial patches. In: Proceedings of International Conference on Learning Representations (ICLR) (2020)
Cousot, P., Cousot, R.: Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Proceedings of Principles of Programming Languages (POPL) (1977)
Cousot, P., Halbwachs, N.: Automatic discovery of linear restraints among variables of a program. In: Proceedings of Principles of Programming Languages (POPL) (1978)
Fischer, M., Baader, M., Vechev, M.T.: Certified defense to image transformations via randomized smoothing. In: Neural Information Processing Systems (NIPS) (2020)
Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.T.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: Symposium on Security and Privacy (S&P) (2018)
Goubault, E., Putot, S.: A zonotopic framework for functional abstractions. Formal Methods Syst. Des. 47(3), 302–360 (2016). https://doi.org/10.1007/s10703-015-0238-z
Gowal, S., et al.: On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint abs/1810.12715 (2018)
Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5
Katz, G., et al.: The Marabou framework for verification and analysis of deep neural networks. In: Dillig, I., Tasiran, S. (eds.) CAV 2019. LNCS, vol. 11561, pp. 443–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25540-4_26
Kopetzki, A., Schürmann, B., Althoff, M.: Methods for order reduction of zonotopes. In: Conference on Decision and Control (CDC) (2017)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Neural Information Processing Systems (NIPS) (2012)
LeCun, Y., et al.: Handwritten digit recognition with a backpropagation network. In: Neural Information Processing Systems (NIPS) (1989)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: Proceedings of International Conference on Learning Representations (ICLR) (2018)
Mirman, M., Gehr, T., Vechev, M.T.: Differentiable abstract interpretation for provably robust neural networks. In: Proceedings of International Conference on Machine Learning (ICML), vol. 80 (2018)
Paszke, A., et al.: Pytorch: an imperative style, highperformance deep learning library. In: Neural Information Processing Systems (NIPS) (2019)
Paulsen, B., Wang, J., Wang, C.: ReluDiff: differential verification of deep neural networks. In: International Conference on Software Engineering (ICSE) (2020)
Paulsen, B., Wang, J., Wang, J., Wang, C.: NeuroDiff: scalable differential verification of neural networks using fine-grained approximation. In: Conference on Automated Software Engineering (ASE) (2020)
Pei, K., Cao, Y., Yang, J., Jana, S.: Towards practical verification of machine learning: the case of computer vision systems. arXiv preprint abs/1712.01785 (2017)
Sadraddini, S., Tedrake, R.: Linear encodings for polytope containment problems. In: Conference on Decision and Control (CDC) (2019)
Silver, D., et al.: Mastering the game of go without human knowledge. Nature 550(7676) (2017)
Singh, G., Gehr, T., Mirman, M., Püschel, M., Vechev, M.T.: Fast and effective robustness certification. In: Neural Information Processing Systems (NIPS) (2018)
Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. PACMPL 3(POPL) (2019)
Sokolsky, O.V., Smolka, S.A.: Incremental model checking in the modal mu-calculus. In: Dill, D.L. (ed.) CAV 1994. LNCS, vol. 818, pp. 351–363. Springer, Heidelberg (1994). https://doi.org/10.1007/3-540-58179-0_67
Szegedy, C., et al.: Intriguing properties of neural networks. In: Proceedings of International Conference on Learning Representations (ICLR) (2014)
Taljaard, J., Geldenhuys, J., Visser, W.: Constraint caching revisited. In: Lee, R., Jha, S., Mavridou, A., Giannakopoulou, D. (eds.) NFM 2020. LNCS, vol. 12229, pp. 251–266. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-55754-6_15
Visser, W., Geldenhuys, J., Dwyer, M.B.: Green: reducing, reusing and recycling constraints in program analysis. In: Symposium on the Foundations of Software Engineering (SIGSOFT) (2012)
Wei, T., Liu, C.: Online verification of deep neural networks under domain or weight shift. arXiv preprint abs/2106.12732 (2021)
Weng, T., et al.: Towards fast computation of certified robustness for ReLU networks. In: Proceedings of International Conference on Machine Learning (ICML), vol. 80 (2018)
Wing, J.M.: Trustworthy AI. Commun. ACM 64(10) (2021)
Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: Proceedings of International Conference on Machine Learning (ICML), vol. 80 (2018)
Yang, G., Dwyer, M.B., Rothermel, G.: Regression model checking. In: International Conference on Software Maintenance (ICSM) (2009)
Zhang, H., Weng, T., Chen, P., Hsieh, C., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Neural Information Processing Systems (NIPS) (2018)
Zhong, Y., Ta, Q.T., Luo, T., Zhang, F., Khoo, S.C.: Scalable and modular robustness analysis of deep neural networks. In: Oh, H. (ed.) APLAS 2021. LNCS, vol. 13008, pp. 3–22. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-89051-3_1
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
Cite this paper
Fischer, M., Sprecher, C., Dimitrov, D.I., Singh, G., Vechev, M. (2022). Shared Certificates for Neural Network Verification. In: Shoham, S., Vizel, Y. (eds) Computer Aided Verification. CAV 2022. Lecture Notes in Computer Science, vol 13371. Springer, Cham. https://doi.org/10.1007/978-3-031-13185-1_7
Print ISBN: 978-3-031-13184-4
Online ISBN: 978-3-031-13185-1