SyReNN: A Tool for Analyzing Deep Neural Networks

Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains. Formally, DNNs are complicated vector-valued functions which come in a variety of sizes and applications. Unfortunately, modern DNNs have been shown to be vulnerable to a variety of attacks and buggy behavior. This has motivated recent work in formally analyzing the properties of such DNNs. This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation. The key insight is to decompose the DNN into linear functions. Our tool is designed for analyses using low-dimensional subsets of the input space, a unique design point in the space of DNN analysis tools. We describe the tool and the underlying theory, then evaluate its use and performance on three case studies: computing Integrated Gradients, visualizing a DNN’s decision boundaries, and patching a DNN.


Introduction
Deep Neural Networks (DNNs) [19] have become the state-of-the-art in a variety of applications, including image recognition [54,34] and natural language processing [12]. Moreover, they are increasingly used in safety- and security-critical applications such as autonomous vehicles [32] and medical diagnosis [10,39,29,38]. These advances have been accelerated by improved hardware and algorithms.
DNNs (Section 2) are programs that compute a vector-valued function, i.e., a function from R^n to R^m. They are straight-line programs written as a composition of alternating linear and non-linear layers. The coefficients of the linear layers are learned from data via gradient descent during a training process. A number of different non-linear layers (called activation functions) are commonly used, including the rectified linear and maximum pooling functions.
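As a small illustration of this "alternating linear and non-linear layers" structure, here is a toy two-layer ReLU network (the weights and biases are made-up values for illustration only):

```python
import numpy as np

def relu(v):
    # Component-wise rectification: max(0, v_i) for each component.
    return np.maximum(v, 0.0)

# Illustrative weights and biases (not from any trained model).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

def f(x):
    # A DNN from R^2 to R^1: linear layer, activation, linear layer.
    return W2 @ relu(W1 @ x + b1) + b2

print(f(np.array([1.0, 2.0])))
```

In practice the coefficients W1, b1, W2, b2 would be learned by gradient descent rather than written by hand.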
Owing to the variety of application domains as well as deployment constraints, DNNs come in many different sizes. For instance, large image-recognition and natural-language-processing models are trained and deployed using cloud resources [34,12], medium-size models could be trained in the cloud but deployed on hardware with limited resources [32], and small models could be trained and deployed directly on edge devices [48,9,23,35,36]. There has also been a recent push to compress trained models to reduce their size [25]. Such smaller models play an especially important role in privacy-critical applications, such as wake word detection for voice assistants, because they allow sensitive user data to stay on the user's own device instead of needing to be sent to a remote computer for processing.
Although DNNs are very popular, they are not perfect. One particularly concerning development is that modern DNNs have been shown to be extremely vulnerable to adversarial examples, inputs which are intentionally manipulated so that they appear unmodified to humans but are misclassified by the DNN [55,20,41,8]. Similarly, fooling examples are inputs that look like random noise to humans, but are classified with high confidence by DNNs [42]. Mistakes made by DNNs have led to loss of life [37,18] and wrongful arrests [27,28]. For this reason, it is important to develop techniques for analyzing, understanding, and repairing DNNs.
This paper introduces SyReNN, a tool for understanding and analyzing DNNs. SyReNN implements state-of-the-art algorithms for computing precise symbolic representations of piecewise-linear DNNs (Section 3). Given an input subspace of a DNN, SyReNN computes a symbolic representation that decomposes the behavior of the DNN into finitely-many linear functions. SyReNN implements the one-dimensional analysis algorithm of Sotoudeh and Thakur [51] and extends it to the two-dimensional setting as described in Section 4.
Key insights. There are two key insights enabling this approach, first identified in Sotoudeh and Thakur [51]. First, most popular DNN architectures today are piecewise-linear, meaning they can be precisely decomposed into finitely-many linear functions. This allows us to reduce their analysis to equivalent questions in linear algebra, one of the most well-understood fields of modern mathematics. Second, many applications only require analyzing the behavior of the DNN on a low-dimensional subset of the input space. Hence, whereas prior work has traded precision for efficiency when analyzing high-dimensional input regions [49,50,17], our work focuses on algorithms that are both efficient and precise when analyzing lower-dimensional regions (Section 4).
Tool design. The SyReNN tool is designed to be easy to use and extend, as well as efficient (Section 5). The core of SyReNN is written as a highly-optimized, parallel C++ server using Intel TBB for parallelization [46] and Eigen for matrix operations [24]. A user-friendly Python front-end interfaces with the PyTorch deep learning framework [45].
Use cases. We demonstrate the utility of SyReNN using three applications. The first computes Integrated Gradients (IG), a state-of-the-art measure used to determine which input dimensions (e.g., pixels for an image-recognition network) were most important to the final classification produced by the network (Section 6.1). The second precisely visualizes the decision boundaries of a DNN (Section 6.2). The last patches (repairs) a DNN to satisfy a desired specification involving infinitely-many points (Section 6.3). Thus, we believe that SyReNN is an interesting and useful tool in the toolbox for understanding and analyzing DNNs.

Contributions. The contributions of this paper are:
- A definition of the symbolic representation of DNNs (Section 3).
- An efficient algorithm for computing symbolic representations of DNNs over low-dimensional input subspaces (Section 4).
- A design of a usable and well-engineered tool implementing these ideas, called SyReNN (Section 5).
- Three applications of SyReNN (Section 6).

Preliminaries
We now formally define the notion of DNN we will use in this paper.
Our work is primarily concerned with the popular class of piecewise-linear DNNs, defined below. In this definition and the rest of this paper, we use the term "polytope" to mean a convex and bounded polytope except where specified otherwise.

Definition 2. A function f : R^n → R^m is piecewise-linear (PWL) if its input domain R^n can be partitioned into finitely-many possibly-unbounded polytopes X_1, X_2, ..., X_k such that the restriction f|_{X_i} is linear for every X_i.
The most common activation function used today is the ReLU function, a PWL activation function defined below.

Definition 3. The rectified linear function (ReLU) is a function ReLU : R^n → R^n defined component-wise by ReLU(v)_i = max(v_i, 0), where ReLU(v)_i is the ith component of the vector ReLU(v) and v_i is the ith component of the vector v.
In order to see that ReLU is PWL, we must show that its input domain R^n can be partitioned such that, in each partition, ReLU is linear. In this case, we can use the orthants of R^n as our partitioning: within each orthant, the signs of the components do not change, hence ReLU is the linear function that simply zeroes out the negative components.
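This can be checked numerically on a small example (the vectors below are arbitrary illustrative values): within the orthant of sign pattern (+, −), ReLU agrees with the linear map diag(1, 0).

```python
import numpy as np

def relu(v):
    # Component-wise max(0, v_i).
    return np.maximum(v, 0.0)

# In the orthant with sign pattern (+, -), ReLU equals the linear map
# that zeroes the second (negative-signed) coordinate.
D = np.diag([1.0, 0.0])
for v in [np.array([2.0, -1.0]), np.array([0.5, -3.0])]:
    assert np.allclose(relu(v), D @ v)  # linear within this orthant
print("ReLU agrees with a linear map within the orthant")
```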
Although we focus on ReLU due to its popularity and expository power, SyReNN works with a number of other popular PWL layers, including MaxPool, Leaky ReLU, Hard Tanh, Fully-Connected, and Convolutional layers, as defined in [19]. PWL layers have become exceedingly common; in fact, nearly all of the state-of-the-art image-recognition models bundled with PyTorch [44] are PWL.
The DNN's input-output behavior on the domain [−1, 2] is shown in Figure 1.

A Symbolic Representation of DNNs
We formalize the symbolic representation according to the following definition.

Definition 4. Given a PWL function f : R^n → R^m and a bounded convex polytope X ⊆ R^n, we define the symbolic representation of f on X, written f_X, to be a finite set of polytopes f_X = {P_1, ..., P_k} such that:
1. The set {P_1, P_2, ..., P_k} partitions X, except possibly for overlapping boundaries.
2. Each P_i is a bounded convex polytope.
3. Within each P_i, the restriction f|_{P_i} is linear.
Notably, if f is a DNN using only PWL layers, then f is PWL and so f_X is well-defined. This symbolic representation allows one to reduce questions about the DNN f to questions about finitely-many linear functions. For example, because linear functions are convex, to verify that ∀x ∈ X. f(x) ∈ Y for some polytope Y, it suffices to verify ∀P_i ∈ f_X. ∀v ∈ Vert(P_i). f(v) ∈ Y, where Vert(P_i) is the (finite) set of vertices of the bounded convex polytope P_i; thus, both quantifiers range over finite sets. The symbolic representation described above can be seen as a generalization of the ExactLine representation [51], which considered only one-dimensional restriction domains of interest.
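The vertex-based reduction can be sketched in a few lines. Here the partitions are the four orthant quadrants of the square [−1, 1]^2 (a valid symbolic representation for f = ReLU), and the property Ay ≤ b is a made-up example requiring outputs to stay in [0, 1]^2:

```python
import numpy as np

def verify_on_partitions(f, partitions, A, b):
    """Check forall x in X: A f(x) <= b, given f_X as lists of vertices.

    Because f is linear on each partition and the property is convex,
    it suffices to check the finitely-many vertices.
    """
    return all(np.all(A @ f(v) <= b + 1e-9)
               for P in partitions for v in P)

def f(v):
    return np.maximum(v, 0.0)  # ReLU

# f_X for X = [-1, 1]^2: the four quadrants (each a linear region of ReLU).
partitions = [
    [np.array(p, dtype=float) for p in quad]
    for quad in [
        [(0, 0), (1, 0), (1, 1), (0, 1)],
        [(0, 0), (0, 1), (-1, 1), (-1, 0)],
        [(0, 0), (-1, 0), (-1, -1), (0, -1)],
        [(0, 0), (0, -1), (1, -1), (1, 0)],
    ]
]
# Property: outputs lie in [0, 1]^2, i.e. y_i <= 1 and -y_i <= 0.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
print(verify_on_partitions(f, partitions, A, b))
```

Tightening the bound to [0, 0.5]^2 would make the check fail at the vertex (1, 1), illustrating that violations are also caught at vertices.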
Example 2. Consider again the DNN f : R → R from Example 1, restricted to the domain X = [−1, 2]. The input-output behavior of f on X is shown in Figure 1, from which we can read off the linear partitions of f on X. Within each of these partitions, the input-output behavior is linear, which for a function from R to R we can see visually as a line segment. As this set of partitions fully covers X, it is a valid f_X.

Computing the Symbolic Representation
This section presents an efficient algorithm for computing f_X for a DNN f composed of PWL layers. To retain both scalability and precision, we require the input region X to be two-dimensional. This design choice is relatively unexplored in the neural-network analysis literature (most analyses strike a balance between precision and scalability while ignoring dimensionality). We show that, for two-dimensional X, we can use an efficient polytope representation to produce an algorithm with good best-case and in-practice efficiency while retaining full precision. This algorithm is a direct generalization of the approach of [51].

The difficulties our algorithm addresses arise from three areas. First, when computing f_X there may be exponentially many linear partitions on all of R^n, while only a small number of them intersect X. Consequently, the algorithm needs to find the partitions that intersect X without explicitly enumerating all of the partitions on R^n. Second, it is often more convenient to specify the partitioning via hyperplanes separating the partitions than via explicit polytopes. For example, for the one-dimensional ReLU function we may simply state that the line x = 0 separates the two partitions, because ReLU is linear both in the region x ≤ 0 and in the region x ≥ 0. Finally, neural networks are typically composed of sequences of linear and piecewise-linear layers, where the partitioning imposed by each layer individually may be well-understood but their composition is more complex. For example, identifying the linear partitions of y = ReLU(4 · ReLU(−3x − 1) + 2) is non-trivial, even though we know the linear partitions of each composed function individually.
Our algorithm only requires the user to specify the hyperplanes defining the partitioning for the activation function used in each layer; our current implementation comes with support for common PWL activation functions. For example, if a ReLU layer is used for an n-dimensional input vector, then the hyperplanes are defined by the equations x_1 = 0, x_2 = 0, ..., x_n = 0. The algorithm then computes the symbolic representation one layer at a time, composing the results sequentially to obtain the symbolic representation of the entire network.
To allow such compositions of layers, instead of directly computing f_X, we define another primitive, denoted by the operator ⊗ and sometimes referred to as Extend, such that

f ⊗ g_X = (f ∘ g)_X,    (1)

and let I : x ↦ x be the identity map. I is linear across its entire input space, and, thus, I_X = {X}. By the definition of ⊗,

f_X = (f ∘ I)_X = f ⊗ I_X,

where the final equality holds by the definition of the identity map I. We can then iteratively apply this procedure to inductively compute

(f_n ∘ ⋯ ∘ f_1)_X = f_n ⊗ (f_{n−1} ⊗ (⋯ ⊗ (f_1 ⊗ I_X))),

which is the required symbolic representation.
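To make the layer-by-layer composition concrete, here is a minimal, self-contained sketch for a one-dimensional input region (illustrative only: the real tool also tracks the preimage of each piece, which this sketch drops, and handles multi-dimensional layers). Each per-layer `extend` function maps a list of segments to a refined list of segments:

```python
def extend_affine(a, c):
    # Affine maps are linear everywhere: map every endpoint, no splitting.
    return lambda rep: [[a * v + c for v in seg] for seg in rep]

def extend_relu_1d(rep):
    # ReLU's 1D linear regions are separated by the breakpoint at 0:
    # split any segment whose endpoints straddle 0, then map the endpoints.
    out = []
    for lo, hi in rep:
        pieces = [(lo, 0.0), (0.0, hi)] if (lo < 0 < hi or hi < 0 < lo) else [(lo, hi)]
        out += [[max(a, 0.0), max(b, 0.0)] for a, b in pieces]
    return out

def network_rep(extends, X):
    # f_X = f_n ⊗ (... ⊗ (f_1 ⊗ I_X)), starting from I_X = {X}.
    rep = [list(X)]
    for extend in extends:
        rep = extend(rep)
    return rep

# The running example f(x) = ReLU(4 * ReLU(-3x - 1) + 2) on X = [-1, 2]:
extends = [extend_affine(-3.0, -1.0), extend_relu_1d,
           extend_affine(4.0, 2.0), extend_relu_1d]
print(network_rep(extends, (-1.0, 2.0)))
```

This recovers the two linear pieces of the running example, returned here as output-space endpoints of each piece.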

Algorithm for Extend
Algorithm 1 presents an algorithm for computing Extend for arbitrary PWL functions, where Extend(h, g_X) = h ⊗ g_X = (h ∘ g)_X.

Geometric intuition for the algorithm. Consider the ReLU function (Definition 3). It can be shown that, within any orthant (i.e., when the signs of all components are held constant), ReLU(x) is equivalent to some linear function, in particular the element-wise product of x with a vector that zeroes out the negative-signed components. However, for our algorithm, all we need to know is that the linear partitions of ReLU (in this case the orthants) are separated by the hyperplanes x_1 = 0, x_2 = 0, ..., x_n = 0.

Given a two-dimensional convex bounded polytope X, the execution of the algorithm for f = ReLU can be visualized as follows. We pick some vertex v of X and begin traversing the boundary of the polytope in counter-clockwise order. If we hit an orthant boundary (corresponding to some hyperplane x_i = 0), the function behaves differently at the points of the polytope on one side of the boundary than at those on the other side. Thus, we partition X into X_1 and X_2, where X_1 lies on one side of the hyperplane and X_2 lies on the other. We recursively apply this procedure to X_1 and X_2 until the resulting polytopes each lie on exactly one side of every hyperplane (orthant boundary). But lying on exactly one side of every hyperplane implies that each polytope lies entirely within a linear partition of the function (a single orthant), hence the application of the function on that polytope is linear, and hence we have our partitioning.
Functions used in the algorithm. Given a two-dimensional bounded convex polytope X, Vert(X) returns a list of its vertices in counter-clockwise order, repeating the initial vertex at the end. Given a set of points X, ConvexHull(X) is their convex hull (the smallest bounded convex polytope containing every point in X). Given a scalar value x, Sign(x) computes the sign of that value (i.e., −1 if x < 0, +1 if x > 0, and 0 if x = 0).

Algorithm description.
The key insight of the algorithm is to recursively partition the polytopes until each partition lies entirely within a linear region of the function f. Algorithm 1 begins by constructing a queue containing the polytopes of g_X. Each iteration either removes a polytope from the queue that lies entirely in one linear region (placing it in Y), or splits some polytope into two smaller polytopes that are put back into the queue. When we pop a polytope P from the queue, Line 6 iterates over all hyperplanes N_k · x = b_k defining the piecewise-linear partitioning of f, looking for any for which some vertex V_i lies on the positive side of the hyperplane and another vertex V_j lies on the negative side. If none exist (Line 7), by convexity we are guaranteed that the entire polytope lies on one side of every hyperplane, meaning it lies entirely within a linear partition of f. Thus, we can add it to Y and continue. If two such vertices are found (starting Line 10), then we find "extreme" indices i and j such that V_i is the last vertex in a counter-clockwise traversal to lie on the same side of the hyperplane as V_1 and V_j is the last vertex lying on the opposite side of the hyperplane. We then call SplitPlane() (Algorithm 2) to partition the polytope across the hyperplane, adding both halves to our worklist.
In the best case, each input partition already lies in a single orthant: the algorithm never calls SplitPlane() at all; it merely iterates over all of the n input partitions, checks their v vertices, and appends to the resulting set, for a best-case complexity of O(nv). In the worst case, it splits each polytope in the queue on each face, resulting in exponential time complexity. As we will show in Section 6, this exponential worst-case behavior is not encountered in practice, making SyReNN a practical tool for DNN analysis.
Consider, as in Figure 2a, a polytope with vertices v_1, v_2, v_3 and a hyperplane such that, within any partition imposed by the hyperplane, f is equivalent to some affine function. We then find that i on Line 11 should be the last vertex on the first side of the hyperplane, while j should be the last vertex on the other side of the hyperplane. We will assume things are oriented so that i = v_1 and j = v_3. Then SplitPlane is called, which adds a new vertex p_i = v_4 (shown in Figure 2b) where the edge v_1 → v_2 intersects the hyperplane, as well as p_j = v_5 where the edge v_3 → v_1 intersects the hyperplane. Separating all of the vertices on the left of the hyperplane from those on the right, we find that this has partitioned the original polytope into two sub-polytopes, each lying on exactly one side of the hyperplane, as desired. If there were more intersecting hyperplanes, we would then recurse on each of the newly-generated polytopes to further subdivide them by the other hyperplanes.

Algorithm 2: SplitPlane(V, g, i, j, N, b)
Input: V, the vertices of the polytope in the input space of g; the function g; i, the index of the last vertex lying on the same side of the orthant face as V_1; j, the index of the last vertex lying on the opposite side of the orthant face as V_1; and N and b, which define the hyperplane N · x = b to split on.
Output: {P_1, P_2}, two sets of vertices whose convex hulls form a partitioning of V such that each lies on only one side of the hyperplane N · x = b.
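The splitting loop and the hyperplane split themselves are compact enough to sketch directly. The code below is a simplified analogue of Algorithms 1 and 2, specialized to ReLU in R^2 (the function and variable names are ours, not the tool's): polygons are CCW vertex lists, and each polygon is split until it lies in a single orthant.

```python
import numpy as np

def split_plane(V, N, b):
    """Split a convex polygon (CCW vertex list) by the hyperplane N.x = b.

    Returns two vertex lists, one per side; new vertices are inserted
    where an edge crosses the hyperplane.
    """
    pos, neg = [], []
    for k in range(len(V)):
        p = np.asarray(V[k], dtype=float)
        q = np.asarray(V[(k + 1) % len(V)], dtype=float)
        dp, dq = N @ p - b, N @ q - b
        (pos if dp >= 0 else neg).append(p)
        if dp * dq < 0:  # the edge p -> q crosses the hyperplane
            x = p + (dp / (dp - dq)) * (q - p)
            pos.append(x)
            neg.append(x)
    return pos, neg

def extend_relu_2d(polys):
    """Worklist loop in the spirit of Algorithm 1, for ReLU on R^2.

    Split polygons by the hyperplanes x_1 = 0 and x_2 = 0 until every
    polygon lies in a single orthant (where ReLU is linear).
    """
    queue, done = list(polys), []
    while queue:
        P = queue.pop()
        for i in range(2):
            N = np.eye(2)[i]
            signs = {np.sign(round(float(N @ np.asarray(v, dtype=float)), 12)) for v in P}
            if 1 in signs and -1 in signs:  # vertices on both sides: split
                queue += [Q for Q in split_plane(P, N, 0.0) if len(Q) >= 3]
                break
        else:
            done.append(P)  # single orthant: ReLU is linear here
    return done

# The square [-1, 1]^2 is refined into its four orthant quadrants.
parts = extend_relu_2d([[(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]])
print(len(parts))
```

Unlike the real implementation, this sketch does not track the i/j "extreme" indices or the mapped (post-activation) vertices; it only illustrates the geometric splitting.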

Representing Polytopes
We close this section with a discussion of implementation concerns when representing the convex polytopes that make up the partitioning f_X. In standard computational geometry, bounded polytopes can be represented in two equivalent forms:
1. The half-space or H-representation, which encodes the polytope as an intersection of finitely-many half-spaces, each defined by an affine inequality Ax ≤ b.
2. The vertex or V-representation, which encodes the polytope as a set of finitely-many points; the polytope is then taken to be the convex hull of these points (i.e., the smallest convex shape containing all of them).
Certain operations are more efficient in one representation than in the other. For example, the intersection of two polytopes in H-representation can be computed in linear time by concatenating their representative half-spaces, but the same is not possible in V-representation.
There are two main operations on polytopes we need to perform in our algorithms: (i) splitting a polytope with a hyperplane, and (ii) applying an affine map to all points in the polytope. In general, the first is more efficient in an H-representation, while the second is more efficient in a V-representation. However, when restricted to two-dimensional polygons, the former is also efficient in a V-representation, as demonstrated by Algorithm 2, motivating our use of the V-representation.
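The affine-map operation is particularly simple in V-representation: since affine maps preserve convex hulls, mapping the vertices maps the polytope. A one-line sketch (with an arbitrary illustrative map):

```python
import numpy as np

# Affine map y = A x + c applied to a polygon in V-representation:
# map each vertex; the image is the convex hull of the mapped vertices.
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # illustrative values
c = np.array([1.0, 0.0])
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
image = square @ A.T + c
print(image.tolist())
```

In H-representation, by contrast, the same operation requires transforming every inequality, and is not even well-defined without care when A is singular.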
Furthermore, the two polytope representations have different resilience to floating-point error. In particular, it is notoriously difficult to achieve high precision with H-representations of polytopes in R^n, because the error introduced by floating-point numbers grows arbitrarily large as one moves in a particular direction along any hyperplane face. Ideally, we would like the hyperplane to be most accurate in the region of the polytope itself, which corresponds to choosing the magnitude of the normal vector correctly. Unfortunately, to our knowledge, there is no efficient algorithm for computing the ideal floating-point H-representation of a polytope, although libraries such as APRON [31] are able to provide reasonable results for low-dimensional spaces. Because neural networks use extremely high-dimensional spaces (often hundreds or thousands of dimensions) and we wish to apply our analysis iteratively, errors from floating-point H-representations can quickly compound and become unmanageable. By contrast, floating-point inaccuracies in a V-representation are directly interpretable as slightly misplacing the vertices of the polytope; no "localization" process is necessary to penalize inaccuracies close to the polytope more than those far away from it.
Another difference is in the space complexity of the representations. In general, H-representations can be more space-efficient than V-representations for common shapes. However, when the polytope lies in a low-dimensional subspace of a larger space, the V-representation is usually significantly more efficient.
Thus, V-representations are a good choice for low-dimensional polytopes embedded in a high-dimensional space, which is exactly what we need for analyzing neural networks with two-dimensional restriction domains of interest. This is why we designed our algorithms around Vert(X), so that they can operate directly on a V-representation.

Extending to Higher-Dimensional Subsets of the Input Space
The 2D algorithm described above can be seen as implementing the recursive case of a more general, n-dimensional version of the algorithm that recurses on each of the (n − 1)-dimensional facets.In 2D, we trace the edges (1D faces) and use the 1D algorithm from [51] to subdivide them based on intersections with the hyperplanes defining the function.More generally, for an arbitrary n-dimensional polytope we can trace the (n − 1)-dimensional facets of the polytope, recursively applying the (n − 1)-dimensional variant of the algorithm to split those facets according to the linear partitions of the function.
We have experimented with such approaches, but found that the overhead of tracking all (n − k)-dimensional faces (commonly known as the face poset or combinatorial structure [16] of a polytope) was too large in higher dimensions. The two-dimensional algorithm addresses this concern by storing the combinatorial structure implicitly, representing 2D polytopes by their vertices in counter-clockwise order, in which edges correspond exactly to sequential pairs of vertices. To our knowledge, no such compact representation allowing arbitrary (n − k)-dimensional faces to be read off is known for higher-dimensional polytopes. Nonetheless, we hope that extending our algorithms to GPUs and other massively-parallel hardware may improve performance enough to mitigate this overhead.

SyReNN tool
This section provides more details about the design and implementation of our tool, SyReNN (Symbolic Representations of Neural Networks), which computes f_X, where f is a DNN using only piecewise-linear layers and X is a union of one- or two-dimensional polytopes. The tool is open-source and available under the MIT license at https://github.com/95616ARG/SyReNN and in the PyPI package pysyrenn.

Input and output format. SyReNN supports reading DNNs from two standard formats: ERAN (a textual format used by the ERAN project [1]) and ONNX (an industry-standard format supporting a wide variety of models) [43]. Internally, the input DNN is described as an instance of the Network class, which is itself a list of sequential Layers. A number of layer types are provided by SyReNN, including FullyConnectedLayer, ConvolutionalLayer, and ReLULayer. To support more complicated DNN architectures, we have implemented a ConcatLayer, which represents a concatenation of the outputs of two different layers. The input region of interest, X, is defined as a polytope described by a list of its vertices in counter-clockwise order. The output of the tool is the symbolic representation f_X.

Overall Architecture. We designed SyReNN in a client-server architecture, using gRPC [21] and protocol buffers [22] as a standard method of communication between the two. This architecture allows the bulk of the heavy computation to be done in efficient C++ code, while allowing user-friendly interfaces in a variety of languages. It also allows practitioners to run the server remotely on a more powerful machine if necessary. The C++ server implementation uses the Intel TBB library for parallelization. Our official front-end library is written in Python and available as a package on PyPI, so installation is as simple as pip install pysyrenn. The entire project can be built using the Bazel build system, which manages dependencies using checksums.

Server Architecture. The major algorithms are implemented as a gRPC server written in C++. When a connection is first made, the server initializes the state with an empty DNN f(x) = x. During the session, three operations are permitted: (i) append a layer g, so that the current session's DNN is updated from f_0 to f_1(x) := g(f_0(x)); (ii) compute f_X for a one-dimensional X; or (iii) compute f_X for a two-dimensional X. We have separate methods for one- and two-dimensional X because the one-dimensional case has specific optimizations for controlling memory usage. The SegmentedLine and UPolytope types are used to represent one- and two-dimensional partitions of X, respectively. When operation (i) is performed, a new instance of the LayerTransformer class is initialized with the relevant parameters and added to a running vector of the current layers. When operation (ii) is performed, a new queue of SegmentedLines is constructed, corresponding to X, and the previously-allocated LayerTransformers are applied sequentially to compute f_X. In this case, extra control is provided to automatically gauge memory usage and pause computation for portions of X until more memory becomes available. Finally, when operation (iii) is performed, a new instance of UPolytope is initialized with the vertices of X and the LayerTransformers are again applied sequentially to compute f_X.
Client Architecture. Our Python client exposes an interface for defining DNNs similar to the popular sequential-network Keras API [11]. Objects represent individual layers in the network, and they can be combined sequentially into a Network instance. The key addition of our library is that this Network exposes methods for computing f_X given a V-representation description of X. To do so, it invokes the server, passes a layer-by-layer description of f followed by the polytope X, then parses the response f_X.
Extending to support different layer types. Different layer types and activation functions are supported by sub-classing the LayerTransformer class. Instances of LayerTransformer expose a method for computing Extend(h, ·) for the corresponding layer h. To simplify implementation, two sub-classes of LayerTransformer are provided: one for entirely-linear layers (such as fully-connected and convolutional layers), and one for piecewise-linear layers. For fully-linear layers, all that needs to be provided is a method computing the layer function itself. For piecewise-linear layers, two methods need to be provided: one computing the layer function itself, and one describing the hyperplanes that separate its linear regions. The base class then directly implements Algorithm 1 for that layer. This architecture makes supporting new layers a straightforward process.
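The same two-method pattern can be mimicked in a few lines of Python. The sketch below is a self-contained analogue of the C++ class hierarchy (class and method names are ours, not the tool's), shown for a one-dimensional input region: the base class implements the generic split-and-map step given only the layer function and its separating hyperplanes.

```python
import numpy as np

class LayerTransformer:
    """PWL layer: subclasses provide the function and its hyperplanes."""

    def compute(self, x):
        raise NotImplementedError  # the layer function itself

    def hyperplanes(self):
        return []  # (N, b) pairs; N.x = b separates linear regions

    def extend_line(self, endpoints):
        # Split a segment wherever it crosses a separating hyperplane,
        # then map the endpoints of each (now linear) piece.
        lo, hi = endpoints
        ts = {0.0, 1.0}
        for N, b in self.hyperplanes():
            dlo, dhi = N @ lo - b, N @ hi - b
            if dlo * dhi < 0:
                ts.add(dlo / (dlo - dhi))
        pts = [lo + t * (hi - lo) for t in sorted(ts)]
        return [(self.compute(pts[k]), self.compute(pts[k + 1]))
                for k in range(len(pts) - 1)]

class ReLUTransformer(LayerTransformer):
    def __init__(self, n):
        self.n = n

    def compute(self, x):
        return np.maximum(x, 0.0)

    def hyperplanes(self):
        return [(np.eye(self.n)[i], 0.0) for i in range(self.n)]

segs = ReLUTransformer(2).extend_line((np.array([-1.0, 2.0]),
                                       np.array([2.0, -1.0])))
print(len(segs))
```

Here the line from (−1, 2) to (2, −1) crosses both ReLU hyperplanes, so it is refined into three linear pieces.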
Float Safety. Like Reluplex [33], SyReNN uses floating-point arithmetic to compute f_X efficiently. Unfortunately, this means that in some cases its results will not be entirely precise when compared to a real-valued or multiple-precision version of the algorithm. If a perfectly precise solution is required, the server code can be modified to use multiple-precision rationals instead of floats. Alternatively, a confirmation pass using multiple-precision numbers can be run after the initial floating-point computation to confirm the accuracy of its results. The use of over-approximations, as in DeepPoly [50], may also be explored for ensuring correctness under floating-point evaluation. Unfortunately, our algorithm does not directly lift to such approximations, since they may blow up the originally-2D region into a higher-dimensional (but very "flat") over-approximating polytope, preventing us from applying the 2D algorithm to the next layer.

Applications of SyReNN
This section presents the use of SyReNN in three example case studies.

Integrated Gradients
A common problem in the field of explainable machine learning is understanding why a DNN made the prediction it did. For example, given an image classified by a DNN as a 'cat,' why did the DNN decide it was a cat instead of, say, a dog? Were certain pixels particularly important in making this decision? Integrated Gradients (IG) [53] is the state-of-the-art method for computing such model attributions.
Definition 5. Given a DNN f, the integrated gradient along dimension i for input x and baseline x′ is defined to be:

IG_i(x) := (x_i − x′_i) × ∫_0^1 (∂f/∂x_i)(x′ + α · (x − x′)) dα.

The computed value IG_i(x) determines how important the ith input (e.g., pixel) was to the classification, relative to the other inputs. However, exactly computing this integral requires a symbolic, closed form for the gradient of the network. Until [51], it was not known how to compute such a closed form, and so IGs were only ever approximated using a sampling-based approach. Unfortunately, because it was unknown how to compute the true value, there was no way for practitioners to determine how accurate their approximations were. This is particularly concerning in fairness applications, where an accurate attribution is exceedingly important.
In [51], it was recognized that, when X = ConvexHull({x, x′}), f_X can be used to exactly compute IG_i(x). This is because within each partition of f_X the gradient of the network is constant (the network behaves as a linear function there), and hence the integral can be written as a weighted sum of finitely-many gradients. Using our symbolic representation, the exact IG can thus be computed as follows:

IG_i(x) = Σ_{(y, y′) ∈ f_X} (y′_i − y_i) · (∂f/∂x_i)((y + y′)/2),

where y and y′ are the endpoints of each segment, with y closer to the baseline x′ and y′ closer to x; because the partial derivative is constant within each segment, it may be evaluated anywhere in the segment's interior, e.g., at the midpoint.
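A worked sketch of this computation on a toy PWL model (the weights, input, and baseline are illustrative values; the single segment breakpoint α = 0.5 is computed by hand here, whereas SyReNN's analysis would find such breakpoints automatically):

```python
import numpy as np

# Toy PWL model: f(x) = ReLU(w . x + b).
w, b = np.array([1.0, -1.0]), -0.5

def grad(x):
    # Gradient of f: w where w.x + b > 0, zero elsewhere (constant per region).
    return w if w @ x + b > 0 else np.zeros_like(w)

x_base, x_in = np.array([0.0, 0.0]), np.array([2.0, 1.0])

# The line from baseline to input crosses w.x + b = 0 at alpha = 0.5,
# so the restriction of f to the line has two linear segments.
alphas = [0.0, 0.5, 1.0]            # segment endpoints along the line
ig = np.zeros(2)
for a0, a1 in zip(alphas, alphas[1:]):
    mid = x_base + 0.5 * (a0 + a1) * (x_in - x_base)
    ig += (a1 - a0) * grad(mid)     # gradient is constant on each segment
ig *= (x_in - x_base)

# Completeness axiom: the attributions sum to f(x) - f(x').
assert np.isclose(ig.sum(), max(w @ x_in + b, 0) - max(w @ x_base + b, 0))
print(ig)
```

Because the per-segment gradients are exact, the result is the true integral, with no sampling error.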
Implementation. The helper class IntegratedGradientsHelper is provided by our Python client library. It takes as input a DNN f and a set of (x, x′) input-baseline pairs, then computes the IG for each pair.
Empirical Results. In [51], SyReNN was used to show conclusively that existing sampling-based methods were insufficient to adequately approximate the true IG. This realization led to changes in the official IG implementation to use the more precise trapezoidal sampling method we argued for.

Visualization of DNN Decision Boundaries
Whereas IG helps explain why a DNN made a particular prediction for a single input point, another major task is visualizing the decision boundaries of a DNN over infinitely-many input points. Figure 3 shows a visualization of an ACAS Xu DNN [32], which takes as input the position of an airplane and an approaching attacker, then produces as output one of five advisories instructing the plane, such as "clear of conflict" or move "weak left." Every point in the diagram represents the relative position of the approaching plane, while the color indicates the advisory.
One approach to such visualizations is to sample finitely-many points and extrapolate the behavior on the entire domain from them. However, this approach is imprecise and risks missing vital information, because there is no way to know the correct sampling density needed to identify all important features.
Another approach is to use a tool such as DeepPoly [50] to over-approximate the output range of the DNN. However, because DeepPoly computes an over-approximation, there may be regions of the input space for which it cannot state with confidence the decision made by the network. In fact, the approximations used by DeepPoly are extremely coarse: a naïve application of DeepPoly to this problem is unable to make claims about any of the input space of interest. In order to utilize it, we must partition the space and run DeepPoly within each partition, which significantly slows down the analysis. Even when using 25^2 partitions, Figure 3b shows that most of the interesting region is still unclassifiable by DeepPoly (shown in white). Only when using 100^2 partitions is DeepPoly able to effectively approximate the decision boundaries, although it is still quite imprecise.

Table 1: Comparing the performance of DNN visualization using SyReNN versus DeepPoly for the ACAS Xu network [32]. "f_X size" is the number of partitions in the symbolic representation. "SyReNN time" is the time taken to compute f_X using SyReNN. "DeepPoly[k] time" is the time taken to run DeepPoly to approximate the decision boundaries with k partitions. Each scenario represents a different two-dimensional slice of the input space; within each slice, the heading of the intruder relative to the ownship and the speed of each involved plane are fixed.
By contrast, f_X can be used to exactly determine the decision boundaries on any 2D polytope subset of the input space, which can then be plotted; this is shown in Figure 3a. Furthermore, as Table 1 shows, the approach using f_X is significantly faster than the one using ERAN, even though it produces the exact answer rather than an approximation. Such visualizations can be particularly helpful in identifying issues to be fixed using techniques such as those in Section 6.3.
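The core geometric fact that makes exact boundary extraction work is that, on each polytope P in the symbolic representation, the DNN coincides with a single linear map x → Ax + b, so each class's winning region within P is an intersection of halfspaces and hence convex. Checking the polytope's vertices therefore suffices to decide whether one class wins on all of P. A minimal sketch of this check, with made-up A, b, and polytopes (not SyReNN's actual implementation):

```python
import numpy as np

def polytope_label(vertices, A, b):
    """On a convex polytope where the DNN equals x -> A @ x + b, return
    the argmax class if it is constant on the whole polytope, else None.

    Each class's winning region {x : class k's score beats all others}
    is an intersection of halfspaces, hence convex, so if every vertex
    lies in it then so does the entire polytope.
    """
    scores = vertices @ A.T + b
    labels = np.argmax(scores, axis=1)
    return int(labels[0]) if np.all(labels == labels[0]) else None

A = np.array([[2.0, 0.0],    # class-0 score: 2x
              [0.0, 1.0]])   # class-1 score: y
b = np.zeros(2)

# The unit square straddles the boundary 2x = y, so no single label wins.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
# This triangle lies entirely where class 0 wins.
triangle = np.array([[1.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
```

Polytopes whose vertices disagree are exactly the ones crossed by a decision boundary, and the boundary segment within them can be computed by intersecting P with the hyperplane where the two classes' linear scores are equal.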

Implementation. The helper class PlanesClassifier is provided by our Python client library. It takes as input a DNN f and an input region X, then computes the decision boundaries of f on X.

Timing Numbers. Timing comparisons are given in Table 1. We see that SyReNN is quite performant: the exact f_X can be computed more quickly than even a mediocre DeepPoly approximation using 55² partitions. Tests were performed on a dedicated Amazon EC2 c5.metal instance, using BenchExec [5] to limit the number of CPU cores to 16 and RAM to 16 GB.

Patching of DNNs
We have now seen how SyReNN can be used to visualize the behavior of a DNN. This can be particularly useful for identifying buggy behavior. For example, in Figure 3a we can see that the decision boundary between "strong right" and "strong left" is not symmetrical.
The final application we consider for SyReNN is patching DNNs to correct undesired behavior. Patching is described formally in [52]. Given an initial network N and a specification φ describing desired constraints on the input/output behavior, the goal of patching is to find a small modification to the parameters of N that produces a new DNN N′ satisfying the constraints in φ.
The key theory behind the DNN patching we use was developed in [52]. The central insight of that work is that, for a certain DNN architecture, correcting the network's behavior on an infinite 2D region X is exactly equivalent to correcting its behavior on the finitely-many vertices Vert(P_i) of each of the finitely-many P_i ∈ f_X. Hence, SyReNN plays a key role in enabling efficient DNN patching.
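The reduction from an infinite region to its vertices rests on linearity: on each partition P_i the network is a single affine map, so a linear output specification is affine in the input and holds on all of conv(Vert(P_i)) exactly when it holds at the vertices. The following toy check illustrates this; the map (A, b), the specification vector c, and the triangle are invented values for illustration, not taken from [52]:

```python
import numpy as np

# On partition P the network is x -> A @ x + b; the specification
# c . f(x) >= 0 (e.g., "class 0's score beats class 1's") is affine in x.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([0.5, -0.25])
c = np.array([1.0, -1.0])           # spec: output_0 - output_1 >= 0

verts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # triangle P

def spec(x):
    return c @ (A @ x + b) >= 0

# The spec holds at every vertex...
vertex_ok = all(spec(v) for v in verts)

# ...and therefore at every convex combination of the vertices,
# i.e., everywhere on P. We spot-check with random combinations.
rng = np.random.default_rng(0)
w = rng.random((1000, 3))
w /= w.sum(axis=1, keepdims=True)   # rows are convex-combination weights
interior_ok = all(spec(p) for p in w @ verts)
```

This is why patching against finitely-many vertex constraints suffices to guarantee the specification on the entire (infinite) region.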
For this case study, we patched the same aircraft collision-avoidance DNN visualized in Section 6.2. We patched the DNN three times to correct three different buggy behaviors of the network: (i) remove "Pockets" of strong left/strong right in regions that are otherwise weak left/weak right; (ii) remove the "Bands" of weak-left advisory behind and to the left of the plane; and (iii) enforce "Symmetry" across the horizontal. The DNNs before and after patching with the different specifications are shown in Figure 4.
Implementation. The helper class NetPatcher is provided by our Python client library. It takes as input a DNN f and pairs (X_i, Y_i) of input regions and output labels, then computes a new DNN f′ that maps all points in each X_i to label Y_i.

Timing Numbers. As in Section 6.2, computing f_X for use in patching took approximately 10 seconds.

Related Work
The related problem of exact reach-set analysis for DNNs was investigated in [59]. However, the authors use an algorithm that relies on explicitly enumerating all exponentially-many (2ⁿ) possible activation signs at each ReLU layer. By contrast, our algorithm adapts to the actual input polytopes, efficiently restricting its attention to activation patterns that are actually feasible.
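The gap between "all possible" and "actually feasible" sign patterns is easy to demonstrate empirically. For a 2D input region, n ReLU hyperplanes cut the plane into at most n(n+1)/2 + 1 cells, so far fewer than 2ⁿ patterns can ever be realized. The random layer and input square below are made-up values for illustration:

```python
import numpy as np

# A single layer of n random ReLU neurons over a 2-D input. Explicit
# sign enumeration would consider all 2^n patterns, but only patterns
# realized by some input point matter.
rng = np.random.default_rng(0)
n = 12
W = rng.standard_normal((n, 2))
b = rng.standard_normal(n)

# Sample the square [-3, 3]^2 and record which sign patterns occur.
xs = np.linspace(-3.0, 3.0, 200)
pts = np.array([[x, y] for x in xs for y in xs])
signs = (pts @ W.T + b) > 0
realized = {tuple(row) for row in signs}

# n lines partition the plane into at most n(n+1)/2 + 1 cells (= 79
# for n = 12), versus 2^12 = 4096 candidate sign vectors.
max_cells = n * (n + 1) // 2 + 1
```

An algorithm that tracks only the cells intersecting the input polytope, as SyReNN does, never pays for the infeasible patterns.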
Hanin and Rolnick [26] prove theoretical properties about the cardinality of f_X for ReLU networks, showing that, for randomly-initialized networks, |f_X| is expected to grow polynomially with the number of nodes in the network.
Thrun [56] and Bastani et al. [4] extract symbolic rules meant to approximate DNNs, which can be thought of as approximations of the symbolic representation f_X.
In particular, the ERAN [1] tool and its underlying DeepPoly [50] domain were designed to verify the non-existence of adversarial examples. Breutel et al. [6] present an iterative refinement algorithm that computes an over-approximation of the weakest precondition as a polytope, where the required output is also a polytope.
Scheibler et al. [47] verify the safety of a machine-learning controller using the SMT solver iSAT3, but only support small unrolling depths and basic safety properties. Zhu et al. [61] use a synthesis procedure to generate a safe deterministic program that can enforce safety conditions by monitoring the deployed DNN and preventing potentially unsafe actions. The presence of adversarial and fooling inputs for DNNs, as well as applications of DNNs in safety-critical systems, has led to efforts to verify and certify DNNs [3,33,14,30,17,7,58,50,2]. Approximate reachability analysis for neural networks safely over-approximates the set of possible outputs [17,59,60,58,13,57].
Prior work in the area of network patching focuses on enforcing constraints on the network during training. DiffAI [40] is an approach to train neural networks that are certifiably robust to adversarial perturbations. DL2 [15] allows for training and querying neural networks with logical constraints.

Conclusion and Future Work
We presented SyReNN, a tool for understanding and analyzing DNNs. Given a piecewise-linear network and a low-dimensional polytope subspace of its input space, SyReNN computes a symbolic representation that decomposes the behavior of the DNN into finitely-many linear functions. We showed how to efficiently compute this representation and presented the design of the corresponding tool. We illustrated the utility of SyReNN on three applications: computing exact IG, visualizing the behavior of DNNs, and patching (repairing) DNNs.
In contrast to prior work, SyReNN explores a unique point in the design space of DNN analysis tools. Instead of trading off precision of the analysis for efficiency, SyReNN focuses on analyzing DNN behavior on low-dimensional subspaces of the domain, for which it can provide both efficiency and precision.
We plan to extend SyReNN to make use of GPUs and other massively-parallel hardware to more quickly compute f_X for large f or X. Techniques to support input polytopes of more than two dimensions are also a ripe area for future work. We may also be able to take advantage of the fact that non-convex polytopes can be represented efficiently in 2D. Extending the algorithms for f_X to handle architectures such as Recurrent Neural Networks (RNNs) will open up new application areas for SyReNN.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Fig. 3: Visualization of decision boundaries for the ACAS Xu network. Using SyReNN (left) quickly produces the exact decision boundaries. Abstract interpretation-based tools like DeepPoly (middle and right) are slower and produce only imprecise approximations of the decision boundaries.