3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models

3D point cloud models are widely applied in safety-critical scenarios, which creates an urgent need for solid proofs of model robustness. The existing verification method for point cloud models is time-expensive and computationally unattainable on large networks. Additionally, it cannot handle the complete PointNet model with the joint alignment network (JANet), which contains multiplication layers and effectively boosts the performance of 3D models. This motivates us to design a more efficient and general framework to verify various architectures of point cloud models. The key challenges in verifying large-scale complete PointNet models are dealing with the cross-non-linearity operations in the multiplication layers and with the high computational complexity of high-dimensional point cloud inputs and added layers. We therefore propose an efficient verification framework, 3DVerifier, that tackles both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation to compute the certified bounds of the outputs of point cloud models. Our comprehensive experiments demonstrate that 3DVerifier outperforms existing verification algorithms for 3D models in terms of both efficiency and accuracy. Notably, our approach achieves an orders-of-magnitude improvement in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers. We release our tool 3DVerifier via https://github.com/TrustAI/3DVerifier for use by the community.


Introduction
Recent years have witnessed increasing interest in 3D object detection, and Deep Neural Networks (DNNs) have demonstrated remarkable performance in this area (Qi et al, 2017a,b). For 3D object detectors, point clouds are used to represent 3D objects; they are usually the raw data obtained from LiDARs and depth cameras. Such 3D deep learning models have been widely employed in multiple safety-critical applications such as motion planning (Varley et al, 2017), virtual reality (Stets et al, 2017), and autonomous driving (Chen et al, 2017; Liang et al, 2018). However, extensive research has shown that DNNs are vulnerable to adversarial attacks: adding a small amount of non-random, ideally human-invisible, perturbation to the input can cause DNNs to make abominable predictions (Szegedy et al, 2014; Carlini and Wagner, 2017; Jin et al, 2020). Therefore, there is an urgent need to address the safety concerns raised by adversarial examples, especially in safety-critical 3D object detection scenarios.
Recently, most works analyzing the robustness of 3D models have focused on adversarial attacks, aiming to reveal a model's vulnerabilities under different types of perturbation, such as adding or removing points and shifting the positions of points (Liu et al, 2019; Zhang et al, 2019a). Meanwhile, adversarial defenses have been proposed to detect or prevent these attacks (Zhang et al, 2019a; Zhou et al, 2019). However, as Tramer et al (2020) and Athalye et al (2018) indicated, even though these defenses are effective against some attacks, they can still be broken by other, stronger attacks. We therefore need a more solid solution, ideally with provable guarantees, to verify whether a model is robust to any adversarial attack within an allowed perturbation budget. This technique is generally referred to as verification of (local) adversarial robustness in the community. So far, various solutions have been proposed for robustness verification, but they mostly focus on the image domain (Boopathy et al, 2019; Singh et al, 2019; Tjeng et al, 2018; Jin et al, 2022). Verifying the adversarial robustness of 3D point cloud models, by contrast, is barely explored. As far as we know, 3DCertify, proposed by Lorenz et al (2021), is the first and also the only work to verify the robustness of 3D models. Although 3DCertify is very inspiring, according to our empirical study it has not yet resolved some key challenges in robustness verification for 3D models.
Firstly, as the first verification tool for 3D models, 3DCertify is time-consuming and thus not computationally attainable on large neural networks. Since 3DCertify is built upon DeepPoly (Singh et al, 2019), directly applying a relaxation algorithm specifically designed for images to high-dimensional point clouds results in out-of-memory issues and terminates the verification. Secondly, 3DCertify can only verify a simplified PointNet model without the Joint Alignment Network (JANet), which consists of matrix multiplication operations. Figure 1 illustrates the abstract architecture of a complete PointNet (Qi et al, 2017a), one of the most widely used models for 3D object detection. Since the learnt representations are expected to be invariant to spatial transformations, JANet is the key enabler in PointNet for this geometric-invariance functionality, realised by the T-Net and matrix multiplications. Recent research also demonstrates that JANet is essential for boosting the performance of PointNet (Qi et al, 2017a), and it is thus widely applied in safety-critical tasks (Paigwar et al, 2019; Aoki et al, 2019; Chen et al, 2021). Thirdly, 3DCertify only works with the l∞-norm metric; however, some researchers in the community regard other lp-norm metrics, such as the l1 and l2-norms, as equally (if not more) important in the study of adversarial robustness (Boopathy et al, 2019; Weng et al, 2018). Thus, a robustness verification tool that works with a wide range of lp-norm metrics is also worthy of comprehensive exploration.
Motivated by these unresolved challenges, this paper aims to design an efficient and scalable robustness verification tool that can handle a wide range of 3D models, including those with JANet structures, under multiple lp-norm metrics (l∞, l1, and l2). We achieve efficient verification by adapting the layer-by-layer certification framework used in (Weng et al, 2018; Boopathy et al, 2019). Because these verifiers are designed for images and cannot be applied to larger-scale 3D point cloud models, we introduce a novel relaxation function for global max pooling that makes the framework applicable and efficient on PointNet. Moreover, the multiplication layers in the JANet structure involve two variables under perturbation, which introduces cross-non-linearity. Due to the high dimensionality of 3D models, this cross-non-linearity incurs significant computational overhead when computing a tight bound. Inspired by recent advances in certifying Transformers (Shi et al, 2020), we propose closed-form linear functions to bound the multiplication layer and combine forward and backward propagation to speed up the bound computation, which takes only O(1) complexity per bound. In summary, the main contributions of this paper are listed below.
• Our method achieves efficient verification. We design a relaxation algorithm that resolves the cross-non-linearity challenge by combining forward and backward propagation, enabling efficient yet tight verification of matrix multiplications.
• We design an efficient and scalable verification tool, 3DVerifier, with provable guarantees. It is a general framework that can verify the robustness of a wide range of 3D model architectures; in particular, it works on complete, large-scale 3D models under l∞, l1, and l2-norm perturbations.
• 3DVerifier, as far as we know, is one of the very few works on 3D model verification, and it advances the existing work, 3DCertify, in terms of efficiency, scalability, and tightness of the certified bounds.
Related Work

Adversarial Attacks on 3D Point Cloud Classification: Szegedy et al (2014) first proposed the concept of adversarial attacks and showed that neural networks are vulnerable to well-crafted imperceptible perturbations. Xiang et al (2019) claimed to be the first to perform extensive adversarial attacks on 3D point cloud models by perturbing the positions of points or generating new points. Recent works extended adversarial attacks for images to 3D point clouds by perturbing points and generating new points (Goodfellow et al, 2014; Zhang et al, 2019a; Lee et al, 2020). Additionally, Cao et al (2019) proposed adversarial attacks on LiDAR systems, Wicker and Kwiatkowska (2019) used an occlusion attack, and Zhao et al (2020) proposed an isometric transformation attack. Against these adversarial attacks, corresponding defense techniques have been developed (Zhang et al, 2019a; Zhou et al, 2019) that are more effective than adversarial training such as (Liu et al, 2019; Zhang et al, 2019b). However, Sun et al (2020) examined existing defense works and pointed out that those defenses remain vulnerable to more powerful attacks. Thus, Lorenz et al (2021) proposed the first verification algorithm for 3D point cloud models with provable robustness bounds.
Verification of Neural Networks: Robustness verification aims to find a guaranteed region such that any input in this region leads to the same predicted label. For image classification, the region is bounded by an lp-norm ball with radius ε, and the aim is to maximize ε. For point cloud models, we reformulate the goal as finding the maximum ε such that no distortion of the point positions within the region can alter the predicted label. Numerous works have attempted to find the exact value of ε in the image domain (Katz et al, 2017; Tjeng et al, 2018; Bunel et al, 2017), yet these approaches are designed for small networks. Other works focus on computing a certified lower bound of ε. To handle the non-linear operations in a neural network, convex relaxations have been proposed to approximate the bounds of the ReLU layer (Salman et al, 2019). Wong and Kolter (2018) and Dvijotham et al (2018) introduced a dual approach to form the relaxation. Several studies computed the bounds via layer-by-layer linear approximation (Wang et al, 2018; Weng et al, 2018; Zhang et al, 2018) or abstract interpretation (Gehr et al, 2018; Singh et al, 2019).
For 3D point cloud models, only Lorenz et al (2021) proposed 3DCertify. However, their method is time-expensive and cannot handle the JANet in full PointNet models. Thus, in this paper, we build a more efficient and general verification framework to obtain the certified bounds.

Overview
The clean input point cloud P0 with n points is defined as P0 = {p0(i) | p0(i) ∈ R^3, i = 1, ..., n}, where each point p0(i) is represented by its coordinate (x, y, z) in 3D space. We choose points that are correctly recognized by the classifier C as inputs to verify the robustness of C.
Throughout this paper, we perturb the points by shifting their positions in 3D space within a distance bounded by an lp-norm ball. Given the perturbed input set Sp(P0, ε) = {P = {p(i)} | ‖p(i) − p0(i)‖p ≤ ε, i = 1, ..., n}, we aim to verify whether the predicted label of the model is stable within the region Sp. This can be solved by finding the minimum adversarial perturbation ε_min via binary search, such that ∃P ∈ Sp(P0, ε_min), argmax C(P) ≠ c, where c = argmax C(P0). Such ε_min is also referred to as the untargeted robustness. As for the targeted robustness, it can be interpreted as requiring that the prediction score of the true class is always greater than that of the target class.
Assuming the target class is t, the objective function is

σ(P0, ε) = min_{P ∈ Sp(P0, ε)} ( y_c(P) − y_t(P) ),

where y_c denotes the logit output for the true class c and y_t that for the target class t, and P ranges over the set of points centred around the original point set P0 within the lp-norm ball of radius ε. Thus, if σ > 0, the logit output of the true class c is always greater than that of the target class, which means that the predicted label cannot be t. Since finding the exact value of σ is an NP-hard problem (Katz et al, 2017), our objective is instead to compute a lower bound of σ. By applying binary search to update the perturbation ε, we can approach the minimum adversarial perturbation; equivalently, the maximum ε_cert that does not alter the predicted label can be attained. Thus, in this paper, we aim to compute ε_cert via certified lower bounds of σ.
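The binary search over ε described above can be sketched as follows. Here `certify` is a hypothetical placeholder for the full bound computation (returning True when the certified lower bound of σ is positive for the given ε); the function name, the default initial ε, and the iteration cap are illustrative, not the paper's actual API.

```python
# Minimal sketch of the binary search for the largest certified radius.
# `certify(eps)` is a stand-in for the verifier; it must answer whether the
# lower bound of sigma = y_c - y_t stays positive over the l_p ball of
# radius eps (an assumption of this sketch, not the paper's interface).

def certified_radius(certify, eps_init=0.05, max_iter=10):
    lo, hi = 0.0, None          # lo: largest eps known to be certified
    eps = eps_init
    for _ in range(max_iter):
        if certify(eps):
            lo = eps
            eps = eps * 2 if hi is None else (eps + hi) / 2
        else:
            hi = eps
            eps = (lo + eps) / 2
    return lo

# Toy stand-in model: pretend certification succeeds exactly for eps <= 0.11.
radius = certified_radius(lambda e: e <= 0.11)
```

The returned `lo` is a conservative estimate: every ε up to it has been certified, matching the ε_cert semantics above.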

Generic Framework
To obtain the lower bound of σ for a model bounded by the lp-norm ball, we propagate the bounds of each neuron layer by layer. As mentioned previously, most structures employed by 3D point cloud models are similar to those of traditional image classifiers, such as convolution, batch normalization, and pooling layers. The most distinctive structure of point cloud classifiers such as PointNet (Qi et al, 2017a) is the JANet. Thus, to compute the logit outputs of the neural network, our verification algorithm adopts three types of formulas to handle different operations: 1) linear operations (e.g. convolution, batch normalization, and average pooling), 2) non-linear operations (e.g. the ReLU activation function and max pooling), and 3) the multiplication operation.
Let Φ^l(P) be the output of the neurons in layer l given point cloud input P; the input layer is Φ^0(P) = P. Suppose the total number of layers in the classifier is m; then Φ^m(P) is the output of the neural network. To verify the classifier, we derive linear functions that yield a global upper bound u and lower bound l for each layer output Φ^l(P) for P ∈ Sp(P0, ε).
The bounds are derived layer by layer from the first layer to the final layer. We have full access to all parameters of the classifier, such as the weights W and biases b. To calculate the output of each neuron, the bounds of the l-th layer for the neuron at position (x, y) are obtained from an earlier l′-th layer via the linear functions

A^(l,l′),L ∗ Φ^l′(P) + B^(l,l′),L ≤ Φ^l_(x,y)(P) ≤ A^(l,l′),U ∗ Φ^l′(P) + B^(l,l′),U,   (2)

where A^L, B^L, A^U, B^U are the weight and bias matrix parameters of the linear functions for the lower and upper bound, respectively. A and B are initially assigned the identity matrix (I) and the zero matrix (0), respectively, so that the output Φ^l_(x,y) is unchanged when l′ = l. To calculate the bounds of the current layer, we back-propagate to previous layers: Φ^l′(P) is substituted by the linear function of the previous layer recursively until the first layer (l′ = 0) is reached. After that, the output of each layer can be expressed as a linear function of the input (Φ^0(P) = P):

A^(l,0),L ∗ P + B^(l,0),L ≤ Φ^l(P) ≤ A^(l,0),U ∗ P + B^(l,0),U.   (3)

Since the perturbation added to the point cloud input is bounded by the lp-norm ball, P ∈ Sp(P0, ε), computing the global bounds requires minimizing the lower bound and maximizing the upper bound in Eq. 3 over the input region. The global bounds of the l-th layer can therefore be represented as

γ^(l),U/L = ±ε ‖A^(l,0),U/L‖_q + A^(l,0),U/L ∗ P0 + B^(l,0),U/L,   (4)

where ‖A‖_q is the lq-norm of A with 1/p + 1/q = 1, and "U/L" denotes that the equation is instantiated for the upper bound (with +) and the lower bound (with −), respectively. This generic framework resembles CROWN (Zhang et al, 2018), which is widely used to verify feed-forward neural networks (e.g. CNN-Cert (Boopathy et al, 2019) and Transformer verification (Shi et al, 2020)). Unlike existing CROWN-based frameworks, we further extend the algorithm to verify point cloud classifiers.
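Once the coefficients A and B of Eq. 3 are known, Eq. 4 can be evaluated directly via the dual norm. A minimal NumPy sketch, assuming the input has been flattened to a vector `p0` and each of `A_U`, `A_L` has one row per output neuron (all names are ours, not the paper's):

```python
import numpy as np

# Hedged sketch of Eq. 4: given linear bounds A*P + B on a layer output and
# an l_p ball of radius eps around the flattened clean input p0, the worst
# case over the ball is resolved by the dual norm ||A||_q of each row,
# with 1/p + 1/q = 1.

def global_bounds(A_U, B_U, A_L, B_L, p0, eps, p=np.inf):
    q = 1.0 if p == np.inf else (np.inf if p == 1 else p / (p - 1))
    upper = eps * np.linalg.norm(A_U, ord=q, axis=1) + A_U @ p0 + B_U
    lower = -eps * np.linalg.norm(A_L, ord=q, axis=1) + A_L @ p0 + B_L
    return lower, upper

# With A = I and B = 0 the bounds reduce to p0 +/- eps under the l_inf ball.
lower, upper = global_bounds(np.eye(2), np.zeros(2),
                             np.eye(2), np.zeros(2),
                             np.array([1.0, 2.0]), 0.5)
```

The Hölder inequality guarantees these are valid extremes of the linear form over the ball, which is exactly why the dual exponent q appears in Eq. 4.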

Functions for linear and non-linear operation
As linear and non-linear functions are basic operations in neural networks, we first adapt the framework given in (Boopathy et al, 2019) to 3D point cloud models. In Section 3.4, we present our novel technique for JANet.
Functions for linear operations. Suppose the output of the l′-th layer, Φ^l′(P), is computed from the output of the (l′−1)-th layer, Φ^(l′−1)(P), by the linear function Φ^l′(P) = W^l′ ∗ Φ^(l′−1)(P) + b^l′, where W^l′ and b^l′ are the parameters of layer l′. Then Eq. 2 can be propagated from layer l′ to layer l′−1 by substituting Φ^l′(P).
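This substitution step can be checked mechanically: replacing Φ^l′ = W Φ^(l′−1) + b inside a bound A ∗ Φ^l′ + B yields a bound with coefficients A·W and bias A·b + B on the previous layer. A small sketch (our variable names):

```python
import numpy as np

# Hedged sketch: propagating a linear bound A*Phi + B backward through a
# fully-connected layer Phi = W @ Phi_prev + b yields an equivalent bound
# (A @ W) @ Phi_prev + (A @ b + B) on the previous layer's output.

def backstep_linear(A, B, W, b):
    return A @ W, A @ b + B

# Consistency check on a random layer and input.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 3)), rng.normal(size=2)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
A2, B2 = backstep_linear(A, B, W, b)
```

Because the substitution is exact for linear layers, no looseness is introduced at this step; relaxation is only needed at non-linear and multiplication layers.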
Functions for basic non-linear operations. For an l′-th layer with non-linear operations, we apply two linear functions to bound Φ^l′(P):

α^L ∗ Φ^(l′−1)(P) + β^L ≤ Φ^l′(P) ≤ α^U ∗ Φ^(l′−1)(P) + β^U.

Given the bounds of Φ^(l′−1)(P), the corresponding parameters α^L, α^U, β^L, β^U can be chosen appropriately. The functions to obtain these parameters are presented in Appendix A.
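For ReLU, a common parameter choice (the paper defers its exact choices to Appendix A, so the following uses the standard CROWN-style relaxation as an illustration, which may differ in detail) is the chord for the upper bound and an adaptive slope for the lower bound:

```python
import numpy as np

# Hedged sketch of a CROWN-style ReLU relaxation: given pre-activation
# bounds l <= z <= u, return (alpha_L, beta_L, alpha_U, beta_U) such that
# alpha_L*z + beta_L <= relu(z) <= alpha_U*z + beta_U on [l, u].

def relu_relaxation(l, u):
    if u <= 0:                      # neuron is provably inactive
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:                      # neuron is provably active
        return 1.0, 0.0, 1.0, 0.0
    a_u = u / (u - l)               # chord through (l, 0) and (u, u)
    b_u = -a_u * l
    a_l = 1.0 if u > -l else 0.0    # adaptive lower slope, zero intercept
    return a_l, 0.0, a_u, b_u

a_l, b_l, a_u, b_u = relu_relaxation(-1.0, 2.0)
```

Only the unstable case (l < 0 < u) introduces looseness; the two stable cases are exact.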

Functions for Multiplication Layer
The most critical structure in the PointNet model is the JANet, which contains the multiplication layers. Assume that a multiplication takes the outputs of the previous layer, Φ^(l′−1)(P), and of the (l′−r)-th layer, Φ^(l′−r)(P) with r ∈ [1, l′], as inputs; the output Φ^l′(P) is then calculated as

Φ^l′_(x,y)(P) = Σ_{k=1}^{d_k} Φ^(l′−r)_(x,k)(P) · Φ^(l′−1)_(k,y)(P),

where d_k is the shared dimension of Φ^(l′−r)_(x,:)(P) and Φ^(l′−1)_(:,y)(P). As the Transformer contains multiplication operations in its self-attention layers, we propose our algorithm for point cloud models inspired by the Transformer verifier (Shi et al, 2020).
In the JANet, the multiplication layer is preceded by a reshape layer and a pooling layer. To simplify the computation, we take Φ^(l′−1) to be the output of the pooling layer, using h = d_k ∗ (k − 1) + y to represent the index transformation of the reshape layer, so that the multiplication can be rewritten as

Φ^l′_(x,y)(P) = Σ_{k=1}^{d_k} Φ^(l′−r)_(x,k)(P) · Φ^(l′−1)_(h)(P).

To obtain the bounds of the multiplication layer, we use two linear functions of the input P to bound Φ^l′(P):

Λ^(l′,0),L ∗ P + Θ^(l′,0),L ≤ Φ^l′(P) ≤ Λ^(l′,0),U ∗ P + Θ^(l′,0),U,   (5)

where Λ and Θ are the new parameters of the linear functions that bound the multiplication.
Theorem 1. Let l_r and u_r be the lower and upper bounds of the (l′−r)-th layer output Φ^(l′−r)(P), r ∈ [1, l′], and let l_1 and u_1 be the lower and upper bounds of the (l′−1)-th layer output Φ^(l′−1)(P). Then there exist closed-form parameters a^L, b^L, c^L and a^U, b^U, c^U such that, for any pair of entries p_1 and p_2 of these outputs lying in their respective bound intervals, the product satisfies a^L p_1 + b^L p_2 + c^L ≤ p_1 p_2 ≤ a^U p_1 + b^U p_2 + c^U.

Proof: see the detailed proof in Appendix B.
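Theorem 1's closed-form parameters are derived in Appendix B; they have the same shape as the classic McCormick envelope for a product over a box, which we use below as an illustration (a sketch under that assumption, not necessarily the paper's exact parameter choice):

```python
import numpy as np

# Hedged sketch: McCormick-style linear bounds on a product p1*p2 over the
# box [l1, u1] x [l2, u2]. Each plane below is valid individually; taking
# the max/min of the two tightens the envelope.

def mccormick(l1, u1, l2, u2):
    lower = lambda p1, p2: max(l2 * p1 + l1 * p2 - l1 * l2,
                               u2 * p1 + u1 * p2 - u1 * u2)
    upper = lambda p1, p2: min(u2 * p1 + l1 * p2 - l1 * u2,
                               l2 * p1 + u1 * p2 - u1 * l2)
    return lower, upper

lo, hi = mccormick(-1.0, 2.0, -3.0, 1.0)
```

Each plane follows from expanding a non-negative product of interval slacks, e.g. (p1 − l1)(p2 − l2) ≥ 0 gives the first lower plane.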
The bounds and the corresponding bound matrices of Φ^(l′−r)(P) and Φ^(l′−1)(P) can be calculated via back-propagation. Given Theorem 1, functions can be formed to compute Λ^(l′,0),U/L and Θ^(l′,0),U/L for Φ^l′_(x,y)(P) in Eq. 5. Thereby, obtaining the bounds of Φ^l′(P) from the computed bound matrices of Φ^(l′−r)(P) and Φ^(l′−1)(P) is a forward propagation process. When it comes to a later layer, the l-th layer, we use the backward process to propagate its bounds back to the multiplication layer, referred to as the l′-th layer. Then, at the multiplication layer, we propagate the bounds to the input layer directly, skipping the earlier layers. The bounds propagated back to the multiplication layer can be represented by Eq. 2; Φ^l′(P) can then be substituted by the linear functions in Eq. 5 to obtain A^(l,0),U/L and B^(l,0),U/L.
Lastly, the global bounds of the l-th layer can be computed using the linear functions in Eq. 4.

A Running Numerical Example
To demonstrate the verification algorithm, we present a simple example network in Fig. 3 with two input points, p1 and p2. Suppose the input points are bounded by an l∞-norm ball with radius ε; our goal is to compute the lower and upper bounds of the outputs (p11, p12) from the input intervals. As Fig. 3 shows, the network contains 12 nodes, and each node is assigned a weight variable. There are three types of operations in the example network: linear operations, non-linear operations (the ReLU activation function), and multiplication.
Next, for the output of the third layer, [p7, p8], with p7 = p5 + p6 and p8 = p6, we back-propagate the constraints to the first layer to obtain the global bounds of p7 and p8 (Eq. 7). For the multiplication layer, following the formulations in Section 3.4, we compute the bounds of p9 and p10 (Eq. 8). Instead of performing back-propagation all the way to the input layer, we calculate these bounds by directly replacing p7 and p8 with their linear functions from Eq. 7. According to Eq. 4, this yields l9 = −2, u9 = 7, l10 = −1, and u10 = 1. Finally, in our example, p11 = p9 and p12 = p10 − p9. After propagating the linear bounds to the previous layer by replacing p9 and p10 with expressions in p7 and p8 from Eq. 8, we construct the constraints of the last layer; substituting p7 and p8 with p1 and p2 directly then gives the global bounds for p11 and p12. The back-substitution yields l11 = −1, u11 = 7, l12 = −8, and u12 = 2.5, which are the final output bounds of our example network.
Robustness Analysis: The inputs p1 = 1 and p2 = 0 lead to the outputs p11 = 1 and p12 = −1 in our example. Thus, to verify the robustness of the example network, we aim to find the maximum ε that guarantees p11 ≥ p12 for any perturbed inputs within the l∞-norm ball of radius ε. In our example, the results for l11, u11, l12, and u12 imply p11 − p12 ∈ [−3.5, 3], so p11 ≥ p12 fails to hold. Thus, we apply binary search to reduce ε and recalculate the output bounds of the network for the new ε. When the maximum number of iterations is reached, we stop the binary search and take the maximum ε for which the lower bound of p11 − p12 is ≥ 0 as the final certified distortion.

Experiments
Dataset We evaluate 3DVerifier on the ModelNet40 dataset (Wu et al, 2015), which contains 9,843 training and 2,468 testing 3D CAD models in 40 categories. From each CAD model a point cloud of 2,048 points in three dimensions is sampled (Qi et al, 2017a). We run verification experiments on point clouds with 1024, 512, and 64 points, using 100 samples randomly selected from all 40 categories of the test set. All experiments are carried out on the same randomly selected dataset.
Models We use PointNet (Qi et al, 2017a) models as the 3D point cloud classifiers. Since the baseline verification method, 3DCertify (Lorenz et al, 2021), cannot handle the full PointNet with JANet, we first perform experiments on PointNet without JANet to make a comprehensive comparison. We then examine the performance of 3DVerifier on full PointNet models. All models use the ReLU activation function.
Baselines (1) We choose the existing 3D robustness certifier, 3DCertify (Lorenz et al, 2021), as the main baseline for PointNet models without JANet, which can be viewed as general CNNs. Additionally, we report the average distortions obtained by adversarial attacks on 3D point cloud models extended from the CW attack (Carlini and Wagner, 2017; Xiang et al, 2019) and the PGD attack (Kurakin et al, 2016). (2) For the complete PointNet proposed by Qi et al (2017a), we provide the average and minimum distortions obtained by the CW attack for robustness estimation. The PGD attack takes a long time to find adversarial examples and its success rate is below 10%, so it is not included as a comparative method.
Implementation 3DVerifier is implemented in Python using NumPy with Numba. All experiments are run on a 2.10 GHz Intel Xeon Platinum 8160 CPU with 512 GB of memory.

Results for PointNet models without JANet
In Table 1, we present the clean test accuracy (Acc.) and average certified bounds (ave) for PointNet models without JANet, and also record the time taken for one iteration of the binary search. The results demonstrate that 3DVerifier improves on 3DCertify in both run-time and tightness of bounds. To make the comparison extensive, we train a 5-layer (N = 10) model and a 9-layer (N = 25) model, which contain two and three multilayer perceptrons (MLPs), respectively, each with one pooling layer. We also report the performance of models with different types of pooling layer: global average pooling and global max pooling. In the experiments, we set the initial ε to 0.05 and the maximum number of binary-search iterations to 10. The experiments are carried out on point clouds with 64, 512, and 1024 points. The average-bound results show that 3DVerifier obtains tighter lower bounds than 3DCertify, with a clear gap to the distortion bounds found by the CW and PGD attacks. Table 1 also shows that our method is much faster than 3DCertify; notably, 3DVerifier delivers an orders-of-magnitude efficiency improvement on large networks. Additionally, our method can compute certified bounds for distortions constrained by the l1, l2, and l∞-norms, making it more general than 3DCertify in terms of norm distance.

Results for PointNet models with JANet
Since 3DCertify does not include a bound-computation algorithm for the multiplication operation, it cannot verify the complete PointNet with the JANet architecture. Thus, as the first work to verify point cloud models with JANet, we compare against the average and minimum distortion bounds of the CW attack-based method. We examine N-layer models (N = 5, 9, 13) with two types of global pooling (average and max) on point clouds with 64, 512, and 1024 points. The certified average bounds of full PointNet models are shown in Table 2. Consistent with previous verification works on images (e.g. Boopathy et al (2019)) and the results in Table 1, the gap between certified bounds and attack-based average distortions is reasonable, with the attack-based average distortions around 10 times greater than the certified bounds. This shows that our method efficiently certifies point cloud models with JANet at a quality comparable to models without JANet.

Discussions
3DVerifier is efficient for large-scale 3D point cloud models.
There are two key features that enable 3DVerifier's efficiency for large 3D networks.
Improved global max pooling relaxation: the relaxation algorithm for the global max pooling layer in the CNN-Cert framework (Boopathy et al, 2019) cannot be adapted directly to 3D point cloud models, as it is computationally unattainable. We therefore propose a linear relaxation for the global max pooling layer based on (Singh et al, 2019). To find the maximum value p_r = max_{m∈M}(p_m): if there exists p_j whose lower bound satisfies l_j ≥ u_k for all k ∈ M\{j}, the lower bound of the max pooling layer is l_r = l_j and the upper bound is u_r = u_j. Otherwise, we bound the output of the layer by φ_r ≥ p_j, where j = argmax_{m∈M}(l_m), and similarly φ_r ≤ p_k, where k = argmax_{m∈M}(u_m). The comparison results in Table 1 show that this improved relaxation of the global max pooling layer enables 3DVerifier to compute the certified bounds much faster than the existing method, 3DCertify.
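The rule above can be sketched concretely as follows; the function returns numeric output bounds implied by the two cases (our naming, a simplified scalar-bound view of the relaxation rather than the paper's full linear form):

```python
import numpy as np

# Hedged sketch of the described global-max-pool relaxation: if one input's
# lower bound dominates every other input's upper bound, the pool output
# inherits that input's bounds exactly; otherwise the output is bracketed
# by the largest lower bound and the largest upper bound.

def maxpool_bounds(l, u):
    j = int(np.argmax(l))
    others = np.delete(u, j)
    if others.size == 0 or l[j] >= others.max():
        return float(l[j]), float(u[j])   # input j provably attains the max
    return float(l[j]), float(u.max())    # fallback: max_m l_m and max_m u_m

l = np.array([0.5, -1.0, 0.2])
u = np.array([0.9, 0.1, 0.4])
lr, ur = maxpool_bounds(l, u)
```

Because the pool is global (one maximum over all points), this costs a single pass over the candidate bounds, which is what makes it cheap on high-dimensional point clouds.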
Combining forward and backward propagation: the verification algorithm proposed in (Shi et al, 2020) demonstrated the effectiveness of combining forward and backward propagation to compute bounds for Transformers with self-attention layers. In 3DVerifier, we adapt this combined forward and backward propagation to compute the bounds of the multiplication layer. As Table 2 shows, the time spent on full PointNet models with JANet is nearly the same as on models without JANet.
CNN-Cert can be viewed as a special case of 3DVerifier. CNN-Cert (Boopathy et al, 2019) is a general, efficient framework for verifying neural networks for image classification, which employ 2D convolution layers. Although our verification method shares a similar design with CNN-Cert, our framework goes beyond it. One key difference is the dimensionality of the input data: PointNet 3D models adopt 1D convolution layers, which 3DVerifier handles efficiently. Additionally, besides general operations such as pooling, batch normalization, and convolution with ReLU activation, we can also handle models with JANet, which contains multiplication layers. Thus, 3DVerifier can tackle more complex and larger neural networks than CNN-Cert. To adapt the framework to 3D point clouds, we introduce a novel relaxation method for max pooling layers, leading to a significant improvement in efficiency.

The Power of JANet
We perform an ablation study to show the importance of JANet in the point cloud classification task. We train three models and record the number of trainable parameters of each; the resulting training accuracies are shown in Table 3, and the specific layer configurations are given in Appendix D. Since PointNet without JANet can be regarded as a general CNN, while JANet is a complex architecture containing a T-Net and a multiplication layer, we examine two models without JANet to isolate the effect of JANet: 1) a 7-layer model, which removes the JANet from the 13-layer full PointNet; and 2) a 13-layer model, which adds the convolution and dense layers of the T-Net to the 7-layer model. The results show that, for the same number of layers, PointNet with JANet improves the training accuracy significantly compared with PointNet without JANet. Therefore, JANet plays an important role in improving the performance of point cloud models.

Conclusion
In this paper, we proposed an efficient and general robustness verification framework to certify the robustness of 3D point cloud models. By employing a linear relaxation for the multiplication layer and combining forward and backward propagation, we can efficiently compute certified bounds for models with various components such as convolution, global pooling, batch normalization, and multiplication. Our experimental results on different models and point clouds with different numbers of points confirm the superiority of 3DVerifier in terms of both computational efficiency and tightness of the certified bounds.

B.1 Derivation for the lower bound
Let the objective (gap) function be G^L(p1, p2) = p1 p2 − (a^L p1 + b^L p2 + c^L). To find appropriate parameters, we seek the values of a^L, b^L, and c^L that minimize the gap, min G^L(p1, p2), subject to G^L ≥ 0, with p1 and p2 constrained to the box [l1, u1] × [l2, u2]. First, we must ensure G^L(p1, p2) ≥ 0 by examining the minimum of G^L. The first-order partial derivatives are ∂G^L/∂p1 = p2 − a^L and ∂G^L/∂p2 = p1 − b^L; setting them to zero gives the critical point p2 = a^L, p1 = b^L. The Hessian of G^L is [[0, 1], [1, 0]], whose determinant is −1 < 0, so the critical point is a saddle point (illustrated in Figure B1) and the minimum of G^L is attained on the boundary of the box. Suppose the gap vanishes at a boundary point (p1^0, p2^0), i.e. G^L(p1^0, p2^0) = 0, while G^L remains non-negative at the other boundary points. Substituting the resulting c^L from Eq. B6 into Eq. B2 reduces the problem to optimizing F^L(p1, p2), where Λ = (u1 − l1)(u2 − l2)/2.

Since the minimum can only occur on the boundary, p1^0 is either l1 or u1.
• If p1^0 = l1: substituting it into Eq. B6 and Eq. B7 and then replacing b^L = l1, we obtain a^L ≤ l2; to maximize F^L we choose a^L = l2.
• If p1^0 = u1: we analogously conclude that a^L ≥ u2, and choose a^L = u2.
As F^L is the same in both cases and does not depend on p2^0, we can choose p2^0 = l2. This yields the parameters for computing the lower bounds.

B.2 Derivation for the upper bound
Similarly, to compute the upper bound, we formulate G^U(p1, p2) = (a^U p1 + b^U p2 + c^U) − p1 p2 and minimize it subject to G^U ≥ 0 over [l1, u1] × [l2, u2]. The same boundary analysis applies:
• If p1^0 = u1: after substituting b^U = l1, we obtain a^U ≥ u2; to minimize F^U(p1, p2) we take a^U = u2, giving F^U(p1, p2) = Λ(l1 l2 + u1 u2).
• If p1^0 = l1: we conclude that a^U ≤ l2; to minimize F^U we set a^U = l2, again giving F^U = Λ(l1 l2 + u1 u2).
As the two cases give the same F^U, we choose p2^0 = l2 to obtain the final parameters for the upper bound.

Appendix C Extra Experiments
Since computing the bounds with 3DCertify is computationally unattainable, it terminates after verifying only a few samples; we therefore present results for verifying 10 samples. From Table C1, we can see that our method obtains tighter bounds and significantly reduces the verification time.

Fig. 1 The abstract framework of the PointNet with joint alignment network (JANet), where MLP stands for multi-layer perceptron.

Fig. 2 Illustration of the combined forward and backward propagation process. The architecture in the MLP block contains convolution with ReLU activation, batch normalization, and pooling. Inside the MLP block, the bounds are computed by backward propagation.
Fig. 3 A running example for a simple neural network with multiplication. The inputs (p1, p2) are bounded by an l∞-norm ball with radius ε.


Table 1
Average certified bounds (ave) and run-time on PointNet without JANet. For the certified bounds, higher is better. The bounds obtained by attacks serve as upper bounds. '*' means that computing the bounds with 3DCertify is computationally unattainable and terminates automatically after verifying several samples; we therefore present the results of verifying 10 samples in Appendix C.

Table 2
Average certified bounds (ave) and run-times on PointNet models with JANet (T-Net)

Table 3
Training accuracy of different structures of point cloud models.
Table C1 Average bounds and computational times on PointNet models without JANet. For the certified bounds, higher is better. The bounds obtained by the attack-based method can be viewed as upper bounds.

Appendix D Configuration of Various PointNet Models

Below are the specific layer configurations for the 13-layer full PointNet with JANet and for the PointNets without JANet.
Configuration for full PointNet with JANet.
Configuration for 7-layer PointNet without JANet.
Configuration for 12-layer PointNet without JANet.