Abstract
In deep neural network compression, channel/filter pruning is widely used to compress a pre-trained network by identifying its redundant channels/filters. In this paper, we propose a two-step filter pruning method that judges the redundant channels/filters layer by layer. The first step designs a filter selection scheme based on the \(\ell _{2,1}\)-norm by reconstructing the feature map of the current layer. More specifically, the filter selection scheme solves a joint \(\ell _{2,1}\)-norm minimization problem, i.e., both the regularization term and the feature map reconstruction error term are constrained by the \(\ell _{2,1}\)-norm. The \(\ell _{2,1}\)-norm regularization drives the channel/filter selection, while the \(\ell _{2,1}\)-norm feature map reconstruction error term provides robust reconstruction. In this way, the proposed filter selection scheme learns a column-sparse coefficient representation matrix that indicates the redundancy of filters. Since pruning the redundant filters of the current layer might dramatically change the output feature map of the following layer, the second step updates the filters of the following layer so that its output feature map approximates that of the baseline network. Experimental results demonstrate the effectiveness of the proposed method. For example, our pruned VGG-16 on ImageNet achieves \(4\times \) speedup with a 0.95% top-5 accuracy drop. Our pruned ResNet-50 on ImageNet achieves \(2\times \) speedup with a 1.56% top-5 accuracy drop. Our pruned MobileNet on ImageNet achieves \(2\times \) speedup with a 1.20% top-5 accuracy drop.
1 Introduction
In the past few years, we have witnessed the rapid development of convolutional neural networks [1,2,3,4,5,6,7]. To achieve higher accuracy, the general strategy is to build deeper and more complicated networks [8,9,10,11,12]. However, such strategies are not efficient with respect to model size and speed. In many applications on mobile terminal devices, such as robotics, self-driving cars and augmented reality, recognition tasks need to be carried out in a timely fashion on a computationally limited platform [13,14,15,16].
There has been rising interest in building small and efficient neural networks in the recent literature [17,18,19,20,21,22,23,24,25,26,27,28]. Many different approaches can be generally categorized as two groups: (1) training small networks directly [22,23,24,25,26,27, 29]; (2) compressing pre-trained networks [17,18,19,20,21, 28, 30].
The former aims to train a small network structure directly [22,23,24,25,26,27, 29], where a popular family of methods is MobileNets, which includes three versions. More specifically, MobileNet-V1 adopts the depthwise separable convolution to greatly reduce the amount of computation and the number of parameters, thereby improving computational efficiency. Based on MobileNet-V1, MobileNet-V2 introduces the inverted residual structure with a linear bottleneck. Building on both, MobileNet-V3 was proposed recently. However, the MobileNets above do not consider the case of redundant filters. In fact, redundancy in the filters consumes computation during forward and backward propagation. Generally speaking, convolutional neural networks exhibit high filter redundancy [8, 31]. Therefore, removing the redundant filters can reduce the running time of neural networks.
The latter aims to compress a pre-trained convolutional neural network (CNN), where the popular method is pruning. Pruning includes parameter pruning and channel/filter pruning. For most CNNs, convolutional layers are the most time-consuming part, while fully connected layers involve massive numbers of network parameters. Therefore, parameter pruning aims to reduce storage, while channel/filter pruning aims to reduce computation cost. Parameter pruning generally suffers from irregular memory access, which limits the achievable efficiency gains; special hardware or software is needed to assist the computation, which may increase computation time [19, 32,33,34]. To avoid these limitations of parameter pruning, this paper focuses on channel/filter pruning, which removes entire channels/filters [12, 18,19,20,21, 35, 36]; the benefits of removing redundant channels/filters can be seen in [12, 35, 36]. Lebedev and Lempitsky [18] and Wen et al. [19] employ group sparsity to select the redundant filters, but slow convergence and slow generation of structured filters heavily limit pruning efficiency. Max response [20] uses the \(\ell _{1}\)-norm to compute the sum of absolute weights of a filter, and a high absolute weight sum means that the filter is important. Since max response measures the importance of each filter in isolation, it may ignore the correlations between different filters. To this end, channel pruning [21] uses the \(\ell _{1}\)-norm to indirectly select the redundant filters, reconstructing the feature map of the next layer from the feature map of the current layer and the filters of the next layer; this requires solving a lasso problem and thus has a high computational complexity.
In this paper, we propose a two-step feature map reconstruction method to prune the redundant filters and channels. In the proposed method, both the reconstruction term and the regularization term employ the \(\ell _{2,1}\)-norm to carry out filter pruning under robust reconstruction. To the best of our knowledge, this is the first filter pruning method based on two-step feature map reconstruction, in which robust reconstruction and filter selection are performed simultaneously. Unlike most filter pruning methods, our method selects representative filters by two-step feature map reconstruction, so that the removed filters do not influence the following layers.
The remainder of this paper is organized as follows: In Sect. 2, we present the background. In Sect. 3, we present the proposed method and its optimal solution. In Sect. 4, we give the theoretical analysis of our method. In Sect. 5, we perform the experiments to demonstrate the effectiveness and efficiency of our method. Finally, a conclusion is drawn in Sect. 6.
2 Background
To prune a feature map with \(n_{i}\) channels, \(n_{i+1}\times n_{i}\times k_{h}\times k_{w}\) convolutional filters \({\varvec{W}}\) are applied on \(N\times n_{i}\times k_{h}\times k_{w}\) input volumes \({\varvec{X}}\) sampled from this feature map of the i-th layer, which produces an \(N\times n_{i+1}\) output matrix \({\varvec{Y}_{i+1}}\). Here, N is the number of samples, \(n_{i+1}\) is the number of output channels, and \(k_{h}\), \(k_{w}\) are the kernel height and width. For simplicity of presentation, the bias term is ignored in the filter pruning methods. To prune the input channels from \(n_{i}\) to a desired \(n_{i}^{'}\) (\(0\le n_{i}^{'} \le n_{i}\)) while minimizing the reconstruction error, the channel pruning method [21] is formulated as follows:
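The optimization problem itself appears to have been lost in extraction; following the formulation in [21], it presumably reads:

```latex
\min_{\varvec{\beta },\varvec{W}}\;
\frac{1}{2N}\left\| \varvec{Y}_{i+1}-\sum _{c=1}^{n_{i}}\beta _{c}\varvec{X}_{c}\varvec{W}_{c}^{T}\right\| _{F}^{2}
\quad \text {s.t.}\quad \left\| \varvec{\beta } \right\| _{0}\le n_{i}^{'}
```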
\({{\left\| \cdot \right\| }_{F}}\) is Frobenius norm. \({\varvec{X}_{c}}\) is \(N\times k_{h}\times k_{w}\) matrix sliced from c-th channel of input volumes \({\varvec{X}}\), \(c=1,2,\ldots ,n_{i}\). \({\varvec{W}_{c}}\) is \(n_{i}\times k_{h}\times k_{w}\) filter weights sliced from c-th channel of \({\varvec{W}}\). \({\varvec{\beta }}\) is coefficient vector of length \(n_{i}\) for channel selection, and \(\beta _{c}\) (c-th entry of \({\varvec{\beta }}\)) is a scalar mask to c-th channel (i.e., to drop the whole channel or not).
Similar to the above channel pruning method [21], some other filter-level pruning methods [12, 20, 30, 35] have also been explored. The core of filter pruning is to measure the importance of each filter, and the major difference among methods is the selection strategy: Max response [20] computes the absolute weight sum of each filter (i.e., \({\sum {\varvec{W(i,:,:,:)}}}\), where i denotes the i-th filter, \(i\in \{1,2,\ldots ,n_{i+1}\}\)) as its importance score. ThiNet [12, 35] uses a greedy strategy to search for a subset of feature map channels such that the output produced by these channels is almost the same as that produced by all the channels. More specifically, ThiNet searches for this subset by minimizing the following reconstruction error.
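The reconstruction objective appears to be missing here; based on ThiNet [12, 35] and the symbols explained below, it presumably has the form (with the subset size fixed by the compression ratio r):

```latex
\min_{S}\;\sum _{i=1}^{d}\left( \hat{y}_{i}-\sum _{j\in S}\hat{x}_{i,j}\right) ^{2}
\quad \text {s.t.}\quad |S|=r\times n_{i}
```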
where d is the sampling number, r is the compression ratio, S is the selected subset of feature map channels, and |S| is the number of elements in S.
After obtaining the subset S, the redundant channels of the feature map \({\varvec{X}_{c}^{d}}\) and the filter \({\varvec{W}_{c}^{d}}\) are removed. For simplicity, we denote the feature map and filter without redundancy by \({\varvec{\hat{X}}_{c}^{d}}\) and \({\varvec{\hat{W}}_{c}^{d}}\). ThiNet further minimizes the reconstruction error by assigning weights \(\varvec{q}\) to \({\varvec{\hat{W}}_{c}^{d}}\).
It is worth noting that both the channel pruning method and ThiNet are data-driven in their filter selection strategies, whereas first-k and max response are non-data-driven methods. Besides, HRank [30], another data-driven method, is formulated as follows:
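The HRank objective appears to be missing from this version; from the symbols explained below, it is presumably of the form:

```latex
\min_{\delta _{ij}}\;\sum _{i=1}^{K}\sum _{j=1}^{n_{i}}\delta _{ij}\,
\frac{1}{g}\sum _{t=1}^{g}\mathrm {Rank}\left( o_{j}^{i}(t,:,:)\right)
\quad \text {s.t.}\quad \sum _{j=1}^{n_{i}}\delta _{ij}=n_{i2}
```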
where K means the number of convolutional layers, \(n_{i}\) represents the number of filters in the i-th convolutional layer, \(\delta _{ij}\) is an indicator which is 1 if the j-th filter in the i-th layer (i.e., \(\varvec{w_{j}^{i}}\)) is unimportant or 0 if \(\varvec{w_{j}^{i}}\) is important, g means the number of input images, \(o_{j}^{i}(t,:,:)\) means the feature map generated by \(\varvec{w_{j}^{i}}\), and \(n_{i2}\) means the number of least important filters in the i-th layer.
3 Building model of filter pruning
Formally, for one input image, let \(n_i\) denote the number of input channels of the i-th convolutional layer and \(h_i\), \(w_i\) be the height and width of the input feature maps. The convolutional layer transforms the input feature map \(\varvec{y}_{i}\in \mathbb {R}^{n_i\times h_i \times w_i}\) into the output feature map \(\varvec{y}_{i+1}\in \mathbb {R}^{n_{i+1}\times h_{i+1} \times w_{i+1}}\), which serves as the input feature map of the next convolutional layer. This is achieved by applying \(n_{i+1}\) 3D filters \({\varvec{F}_{i,j}}\in \mathbb {R}^{n_{i}\times k \times k}\) (all the filters together constitute the filter matrix \({\varvec{F}_{i+1}}\in \mathbb {R}^{n_{i+1}\times n_{i}\times k \times k}\)) on the \(n_i\) input channels, where each filter generates one feature map channel. The number of operations of the convolutional layer is \(n_{i+1}n_ik^{2}h_{i+1}w_{i+1}\). If a filter \({\varvec{F}}_{i,j}\) is pruned, its corresponding feature map channel is removed, which saves \(n_{i}k^{2}h_{i+1}w_{i+1}\) operations. The filters of the next convolutional layer that apply on the removed feature map channel are also removed, which saves an additional \(n_{i+2}k^{2}h_{i+2}w_{i+2}\) operations.
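As a concrete check of the operation counts above, a minimal sketch with hypothetical layer sizes (the sizes 256, 128, 512, 56 and 28 are our own illustrative choices, not values from the paper):

```python
def conv_ops(n_out, n_in, k, h_out, w_out):
    """Multiply-accumulate operations of a conv layer: n_out * n_in * k^2 * h_out * w_out."""
    return n_out * n_in * k * k * h_out * w_out

# Hypothetical layer i+1: 256 filters over 128 input channels, 3x3 kernels, 56x56 output.
baseline = conv_ops(256, 128, 3, 56, 56)
# Pruning one filter of layer i+1 removes one output channel,
# saving n_i * k^2 * h_{i+1} * w_{i+1} operations:
saved_here = conv_ops(1, 128, 3, 56, 56)
# It also removes one input channel from every filter of layer i+2
# (say 512 filters with 28x28 output), saving n_{i+2} * k^2 * h_{i+2} * w_{i+2} more:
saved_next = conv_ops(512, 1, 3, 28, 28)
```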
Furthermore, if there are m input images, they produce feature maps such as the i-th feature map \(\varvec{y}_{i}\in \mathbb {R}^{m\times n_i\times h_i \times w_i}\) and the \((i+1)\)-th feature map \(\varvec{y}_{i+1}\in \mathbb {R}^{m\times n_{i+1}\times h_{i+1} \times w_{i+1}}\). For simplicity, we sample from \(\varvec{y}_{i}\) to generate \({\varvec{Y}_{i}}\in \mathbb {R}^{N_{i}\times n_{i}}\); the detailed sampling procedure can be found in [21]. Here, \(N_{i}\) is the number of samples of the i-th layer, and \(n_{i}\) is the channel number of the feature map of the i-th layer. To prune the output channels from \(n_i\) to a desired \(n_i^{'}\) while minimizing the reconstruction error, we formulate the proposed objective function as follows:
where \({\varvec{Y}_i}\) is an \(N_{i}\times n_{i}\) matrix and \({\varvec{A}}\in \mathbb {R}^{n_{i}\times n_{i}}\) is a coefficient representation matrix. The designed objective function makes \({\varvec{A}}\) column-sparse, so that it indicates the redundancy of the feature map channels and filters in the current layer (see Fig. 1).
Without loss of generality, we remove the layer index i, and thus, our objective function can be rewritten as follows:
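The objective itself appears to be missing from this version; given the definitions of \({\varvec{W}_1}\) and \({\varvec{W}_2}\) in the weighted form that follows, the joint minimization is presumably:

```latex
\min_{\varvec{A},\varvec{b}}\;
\left\| (\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T}) \right\| _{2,1}
+\lambda \left\| \varvec{A} \right\| _{2,1}
```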
Using some mathematical techniques, problem (6) can be rewritten as
where \({\varvec{W}_2} \in \mathbb {R}^{n\times n}\) and \({\varvec{W}_1} \in \mathbb {R}^{N\times N}\) are two diagonal matrices, whose diagonal elements are \({\varvec{W}_2^{cc}=\frac{1}{2{{\left\| {(\varvec{A})^{c}} \right\| }_{2}}}}\) and \({\varvec{W}_1^{cc}=\frac{1}{2{{\left\| ((\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T}))^{c} \right\| }_{2}}}}\), respectively. \({(\varvec{A})^{c}}\) denotes the c-th column of matrix \({\varvec{A}}\). When \({{\left\| {(\varvec{A})^{c}} \right\| }_{2}=0}\), we let \({\varvec{W}_2^{cc}=\frac{1}{2{{\left\| {(\varvec{A})^{c}} \right\| }_{2}}+\zeta }}\), where \(\zeta \) is a very small constant. Similarly, when \({{\left\| ((\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T}))^{c} \right\| }_{2}}\) \(=0\), we let \({\varvec{W}_1^{cc}=\frac{1}{2{{\left\| ((\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T}))^{c} \right\| }_{2}}+\zeta }}\). In this way, the smaller \({\varvec{W}_1^{cc}}\) is, the more likely the c-th response is an outlier; the smaller \({\varvec{W}_2^{cc}}\) is, the more important the c-th filter is. Here, \({\sqrt{{\varvec{W}_{1}}}}\) weights the responses: clean responses are weighted more heavily, while outlier responses are weighted less heavily, which makes our method robust to outliers. On the other hand, the regularization term \({\varvec{A}\sqrt{{\varvec{W}_{2}}}}\) guides the selection of filters. By adjusting the parameter \(\lambda \), our method can select the effective filters under the robust reconstruction criterion.
Moreover, it can be seen that minimizing \({2tr((\varvec{Y}^{T}-(\varvec{A}\varvec{Y}^{T}+\varvec{b}\varvec{1}^{T}))\varvec{W}_{1}(\varvec{Y}^{T}-(\varvec{A}\varvec{Y}^{T}+\varvec{b}\varvec{1}^{T}))^{T})}\) \({+2\lambda tr(\varvec{A}{\varvec{W}_{2}}\varvec{A}^{T})}\) forces \({{\left\| ((\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T}))^{c} \right\| }_{2}}\) and \({{\left\| (\varvec{A})^{c} \right\| }_{2}}\) to be very small when \({\varvec{W}_1}\) and \({\varvec{W}_2}\) are large. As a result, some columns of \({(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})}\) and of \({\varvec{A}}\) may be close to zero, and thus a column-sparse residual \({(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})-\varvec{A}(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})}\) and a column-sparse \({\varvec{A}}\) are obtained.
Our goal is to remove some redundant output channels without loss of performance. After an algorithm judges the redundant channels and filters and prunes them, we should ensure that the feature map of the next layer is almost preserved, so that the removed channels do not influence the final classification result. Therefore, we reconstruct the filters of the next layer from the currently remaining channels by linear least squares, whose objective function is as follows:
where \({\varvec{Y}_{i+1}}\) denotes the feature map of the \((i+1)\)-th layer, \({\varvec{Y}_{i}^{'}}\) denotes the feature map of the i-th layer after the removal of redundant channels, and \({\varvec{F}_{i+1}^{'}}\) denotes the filters of the \((i+1)\)-th layer after the removal of redundant channels. Here, \({\varvec{F}_{i+1}^{'}}\) is an \(n_{i+1}\times n_{i}kk\) matrix reshaped from \({\varvec{F}_{i+1}}\). It is worth noting that if r channels of \({\varvec{Y}_{i}}\) are redundant, then \({\varvec{Y}_{i}^{'}\in \mathbb {R}^{N\times (n_{i}-r)}}\) and \({\varvec{F}_{i+1}^{'}\in \mathbb {R}^{n_{i+1}\times (n_{i}-r)}}\).
To sum up, the flowchart is given in Fig. 2, which mainly includes two steps: the first is to judge the redundant filters by reconstructing the feature map of the current layer, and the second is to learn new filters by reconstructing the feature map of the next layer. Our method proceeds layer by layer. For one layer, such as the \((i+1)\)-th layer, the original computation cost is \(n_{i+1}n_{i}k^{2}h_{i+1}w_{i+1}\) flops, while the remaining computation cost is \((n_{i+1}-r_f)(n_{i}-r_{c})k^{2}h_{i+1}w_{i+1}\) flops.
Discussion: Some recent works [20, 21] also introduce sparse norms, such as the \(\ell _{1}\)-norm [20] or Lasso [21]. However, we emphasize that our formulation and idea are different. Lasso [21] uses the current filters and the previous feature map to reconstruct the feature map of the current layer and adds a sparsity constraint on each channel, but the computational complexity of that model is very high. Moreover, both methods [20, 21] need the sparsity level \(n_{i}^{'}\) to be given in advance. Different from Lasso, we perform robust reconstruction of the feature map of the current layer: if the feature map is redundant, our model can automatically identify the redundant filters of its previous layer. Furthermore, we ensure that the remaining filters can recover the feature map of the next layer. Besides, these methods [20, 21] use the \(\ell _{1}\)-norm to select the redundant channels, while we use the \(\ell _{2,1}\)-norm to select them from the perspective of the feature map of the current layer.
3.1 The optimal solution of problem (6)
The global optimal solution of problem (7) can be easily obtained by using an iterative re-weighting method, which includes the following two steps.
Step 1: Given \({\varvec{A}}\), we compute \(\varvec{b}\). The optimization problem (6) becomes,
Setting the derivative of (9) with respect to \({\varvec{b}}\) to zero, we get \({\varvec{b}=\frac{(\varvec{Y}^{T}{\varvec{W}_1}-\varvec{A}\varvec{Y}^{T}{\varvec{W}_1})\varvec{1}}{\varvec{1}^{T}\varvec{W}_1 \varvec{1}}}\).
Step 2: Given \(\varvec{b}\), we compute \({\varvec{A}}\). The optimization problem (7) becomes,
Setting the derivative of (10) with respect to \({\varvec{A}}\) to zero, we get \(\varvec{A}=(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})\varvec{W}_1(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})^{T}(\lambda \varvec{W}_2+(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})\varvec{W}_1(\varvec{Y}^{T}{-\varvec{b}\varvec{1}^{T})^{T})^{-1}}\).
Iterating the above two steps will reach the global optimal solution. Algorithm 1 gives more details.
Algorithm 1. Optimization Algorithm of Problem (6)
Input: Feature map \(\varvec{Y}\), parameter \(\lambda \);
1: Initialize \({\varvec{W}_{1}}=\varvec{I}\), \({\varvec{W}_{2}}=\varvec{I}\) and \(\varvec{b}=\varvec{0}\);
2: while not converge do
2.1: Compute \(\varvec{A}=(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})\varvec{W}_1(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})^{T}\)
\((\lambda \varvec{W}_2+(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})\varvec{W}_1(\varvec{Y}^{T}-\varvec{b}\varvec{1}^{T})^{T})^{-1}\);
2.2: Compute \(\varvec{b}=\frac{(\varvec{Y}^{T}{\varvec{W}_1}-\varvec{A}\varvec{Y}^{T}{\varvec{W}_1})\varvec{1}}{\varvec{1}^{T}\varvec{W}_1 \varvec{1}}\);
2.3: Update \({\varvec{W}_{1}}\) by its definition in (7);
2.4: Update \(\varvec{W}_2\) by its definition in (7);
end while
Output: Representation matrix \(\varvec{A}\), optimal mean vector \(\varvec{b}\).
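Algorithm 1 can be sketched in numpy as follows. This is an illustrative re-implementation under our own assumptions, not the authors' code: the function name, fixed iteration count and the small constant \(\zeta \) are our choices, the mean update follows the formula derived in Step 1, and the diagonal matrices \(\varvec{W}_1\), \(\varvec{W}_2\) are stored as vectors:

```python
import numpy as np

def l21_filter_selection(Y, lam=1.0, n_iter=50, zeta=1e-8):
    """Iterative re-weighting for
    min_{A,b} ||(Y^T - b 1^T) - A (Y^T - b 1^T)||_{2,1} + lam * ||A||_{2,1}."""
    N, n = Y.shape
    X = Y.T                        # n x N matrix of responses
    W1 = np.ones(N)                # diagonal of W1 (per-sample weights)
    W2 = np.ones(n)                # diagonal of W2 (per-filter weights)
    b = np.zeros((n, 1))
    one = np.ones((N, 1))
    for _ in range(n_iter):
        # Step 2.1: A = M W1 M^T (lam W2 + M W1 M^T)^{-1} with M = Y^T - b 1^T.
        M = X - b @ one.T
        G = (M * W1) @ M.T                         # M @ diag(W1) @ M^T
        A = G @ np.linalg.inv(lam * np.diag(W2) + G)
        # Step 2.2: b = (Y^T W1 - A Y^T W1) 1 / (1^T W1 1).
        R = X - A @ X
        b = (R * W1) @ one / (W1 @ one)
        # Steps 2.3-2.4: re-weight by the current column norms (zeta avoids division by 0).
        E = (X - b @ one.T) - A @ (X - b @ one.T)
        W1 = 1.0 / (2.0 * np.linalg.norm(E, axis=0) + zeta)
        W2 = 1.0 / (2.0 * np.linalg.norm(A, axis=0) + zeta)
    # Column norms of A: near-zero columns indicate redundant channels/filters.
    scores = np.linalg.norm(A, axis=0)
    return A, b, scores
```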
4 Theoretical analysis
4.1 Convergence analysis
Before giving the convergence proof of the optimization algorithm, we need to first give Lemma 1 [37].
Lemma 1
For any nonzero vectors \({\varvec{U}},\varvec{q}\in {\mathbb {R}^{d}}\),
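The inequality of Lemma 1 appears to have been lost in extraction; as stated in [37], it reads:

```latex
\left\| \varvec{U} \right\| _{2}-\frac{\left\| \varvec{U} \right\| _{2}^{2}}{2\left\| \varvec{q} \right\| _{2}}
\le \left\| \varvec{q} \right\| _{2}-\frac{\left\| \varvec{q} \right\| _{2}^{2}}{2\left\| \varvec{q} \right\| _{2}}
```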
Based on Lemma 1, we prove Theorem 1.
Theorem 1
Algorithm 1 will monotonically decrease the value of the objective function of the optimization problem (7) in each iteration and converge to a local optimal solution.
Proof
For simplicity, we denote the updated \(\varvec{b}\) and \({\varvec{A}}\) by \(\widetilde{\varvec{b}}\) and \({\widetilde{\varvec{A}}}\). Since the updated \(\widetilde{\varvec{b}}\) and \({\widetilde{\varvec{A}}}\) are the optimal solution of problem (5), according to the definition of \({\varvec{W}_1}\) and \({\varvec{W}_2}\), we have
On the one hand, according to Lemma 1, we have
Using matrix calculus for problem (13), we have the following formulation:
On the other hand, according to Lemma 1, we have
Similarly, using matrix calculus for problem (15), we have the following formulation:
By combining problem (12) and problem (14) with problem (16), we have
Since problem (5) has an obvious lower bound 0, the optimization problem (5) converges to the global optimal solution.
4.2 Computational complexity analysis
The main computation of Problem (6) comprises two steps in each iteration: the first step computes \(\varvec{b}\), whose computational complexity is \(O(n^3)\); the second step computes \({\varvec{A}}\), whose computational complexity is also at most \(O(n^3)\). Therefore, the computational complexity of one iteration is up to \(O(n^3)\). If Algorithm 1 needs t iterations, the total computational complexity is on the order of \(O(t n^3)\).
5 Experiments
We prune the filters of three types of networks, i.e., VGG-16 [6], ResNet-50 [38] and MobileNet [22], evaluated on ImageNet [39], CIFAR-10 [40] and CIFAR-100 [40]. ImageNet comprises 1.28 million training images and 50,000 validation images from 1000 classes. We fine-tune networks on the training set and report the accuracy on the validation set with the shorter side of images resized to 256. For data augmentation, we follow the standard practice [21] and perform random-size cropping to 224\(\times \)224 and random horizontal flipping; more experimental details can be found in [21]. CIFAR-10 consists of 10 classes of images with 6000 images per class, where 50,000 images are for training and 10,000 for validation. Similarly, CIFAR-100 consists of 100 classes of images with 600 images per class, where 50,000 images are for training and 10,000 for validation. On CIFAR-10 and CIFAR-100, we fine-tune networks with training images resized to 32\(\times \)32 and with the per-pixel mean subtracted on the training and validation sets. For data augmentation, we adopt random horizontal flipping.
Our method is compared to the classical first-k and max response [20] methods, as well as the state-of-the-art channel pruning [21], ThiNet [35] and HRank [30], which are similar to our method to some extent.
Implementation: Our method is performed on the network layer by layer. Our method has one parameter \(\lambda \) and thus requires parameter selection. More specifically, when our method performs parameter selection on a layer, the other layers are fixed to the baseline. One common way of determining the parameter is grid search: we vary \(\lambda \) within the range \(\{10^{-6},\) \(10^{-5},\ldots ,10^{6}\}\), and the value yielding the highest classification result is taken as the optimal parameter. After the parameters of our method on all the layers of a network are determined, the compressed network is obtained. Theoretically, if the redundant filters of the i-th layer are removed, the updated filters of the \((i+1)\)-th layer can almost recover the feature map of the \((i+1)\)-th layer, and thus, the final classification performance is preserved. For a fair comparison, all the methods adopt the same speedup ratio. For example, all the methods use a speedup ratio of 2 (i.e., \(2\times \)) on ResNet-50. More specifically, given the \(2\times \) speedup ratio on ResNet-50, the number of retained filters is first determined by the channel pruning method [21], and then the same number is adopted by all the other methods, including ours.
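The per-layer grid search described above can be sketched as follows; `evaluate_accuracy` is a hypothetical stand-in (here a synthetic curve peaking at \(\lambda =1\)) for pruning one layer with a given \(\lambda \), keeping the other layers at baseline, and measuring validation accuracy:

```python
import math

# Hypothetical stand-in: in the actual pipeline this would prune layer `layer_idx`
# with parameter `lam` (other layers fixed to the baseline) and return validation
# accuracy; here it is a synthetic unimodal curve peaking at lam = 1.
def evaluate_accuracy(layer_idx, lam):
    return -abs(math.log10(lam))

def select_lambda(layer_idx):
    # Grid {10^-6, ..., 10^6}, as described above; keep the lambda with the
    # highest classification accuracy.
    grid = [10.0 ** p for p in range(-6, 7)]
    return max(grid, key=lambda lam: evaluate_accuracy(layer_idx, lam))
```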
5.1 VGG-16 pruning
5.1.1 Experimental results of single layer
We implement three methods to compress the VGG-16 network, and the experimental results are shown in Fig. 3. It can be seen from the Conv2_1 layer that, as the speedup ratio increases, the classification accuracy of the three methods drops dramatically. However, our method outperforms the other methods when the speedup ratio is between \(2\times \) and \(4\times \), where \(2\times \) means that the running time of the compressed network is 0.5 times that of the baseline network. In this range, the classification accuracy drops by about 0.1% to \(0.84\%\) compared to the baseline. More specifically, when the classification accuracy drops by \(0.84\%\), our method achieves a \(4\times \) speedup ratio (i.e., our flops are \(25\%\) of the baseline).
5.1.2 Experimental results across all layers
Guided by the single-layer experimental results, we observe that there is large redundancy in the first several layers of VGG-16, while its last layers are not very redundant. We therefore prune more filters in the shallow layers while keeping the original filters in the conv5_x layers; the detailed filter pruning settings follow channel pruning [21]. The experimental results without fine-tuning are shown in Fig. 4, which shows that the three methods obtain similar results when the speedup ratio is small. As the speedup ratio increases, the advantage of our method becomes clear. The experimental results with fine-tuning are shown in Tables 1, 2 and 3. It can be seen that, at the same speedup ratio (so the flops and parameters (i.e., #Param) of all the methods are the same), our method outperforms the other methods with respect to top-1 classification accuracy.
5.2 ResNet pruning
We also apply the pruning methods to ResNet, a multi-path network whose structure is more complex than VGG-16. Through single-layer experiments, we observe large redundancy in the shallow layers. We therefore prune the branch2a and branch2b layers but keep the branch2c layer in this network. The detailed filter pruning settings on ResNet-50 follow channel pruning [21]. We compared five pruning methods (i.e., first k, max response [20], channel pruning [21], ThiNet [35] and HRank [30]) at a \(2\times \) speedup ratio, and the experimental results are shown in Tables 4, 5 and 6, where a negative value means the accuracy is higher than the baseline. It can be seen that, at the same speedup ratio, our method is comparable with channel pruning, ThiNet and HRank, and outperforms the classical first k and max response methods. For example, in Table 4, our method obtains better top-1 classification accuracy than first k, max response, channel pruning and HRank, but worse than ThiNet; however, we obtain better top-5 classification accuracy than ThiNet, so our method is comparable with ThiNet. In Table 6, our method obtains worse top-1 classification accuracy than channel pruning and HRank, while we obtain better top-5 classification accuracy than HRank; therefore, our method is comparable with HRank.
5.3 MobileNet pruning
As a lightweight network, MobileNet does not have a high degree of redundancy. The detailed filter pruning settings on MobileNet follow the strategy used in channel pruning [21]. We compared five pruning methods (i.e., first k, max response [20], channel pruning [21], ThiNet [35] and HRank [30]) at a \(1.5\times \) speedup ratio. The experimental results in Table 7 show that our method outperforms the other methods at the same speedup ratio.
6 Conclusion
In this paper, we propose a two-step feature map reconstruction method to prune redundant filters and channels, which is used to compress CNNs such as VGG-16, ResNet-50 and MobileNet. The experimental results on different networks and datasets show the effectiveness of our method.
References
Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS, pp. 1097–1105 (2012)
Williams, R.J., Zipser, D.: A learning algorithm for continually running fully recurrent neural networks. Neural Comput. 1(2), 270–280 (1989)
Jia, Y., Shelhamer, E., Donahue, J.: Caffe: Convolutional architecture for fast feature embedding. In: ACM MM, pp. 675–678 (2014)
Ciresan, D.C., Meier, U., Masci, J.: Flexible, high performance convolutional neural networks for image classification. In: IJCAI, p. 1237 (2011)
Szegedy, C., Liu, W., Jia, Y.: Going deeper with convolutions. In: CVPR, pp. 1–9 (2015)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Taigman, Y., Yang, M., Ranzato, M.A.: Deepface: Closing the gap to human-level performance in face verification. In: CVPR, pp. 1701–1708 (2014)
Shang, W., Sohn, K., Almeida, D.: Understanding and improving convolutional neural networks via concatenated rectified linear units. In: ICML, pp. 2217–2225 (2016)
Chatfield, K., Simonyan, K., Vedaldi, A.: Return of the devil in the details: delving deep into convolutional nets. arXiv preprint arXiv:1405.3531 (2014)
Lin, M., Chen, Q., Yan, S.: Network in network. arXiv preprint arXiv:1312.4400 (2013)
Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. In: Proceedings of the IEEE, pp. 2278–2324 (1998)
Luo, J.H., Wu, J., Lin, W.: Thinet: A filter level pruning method for deep neural network compression. In: ICCV, pp. 5058–5066 (2017)
Liu, X., Zhou, Y., Zhao, J., Yao, R., Liu, B., Ma, D., Zheng, Y.: Multiobjective ResNet pruning by means of EMOAs for remote sensing scene classification. Neurocomputing 381, 298–305 (2020)
Zou, J., Rui, T., Zhou, Y., Yang, C., Zhang, S.: Convolutional neural network simplification via feature map pruning. Comput. Electr. Eng. 70, 950–958 (2018)
Singh, P., Kadi, V.S.R., Namboodiri, V.P.: Convolutional neural network simplification via feature map pruning. Image Vis. Comput. 93, 1–14 (2019)
Mao, Y., He, Z., Ma, Z., Tang, X., Wang, Z.: Efficient convolution neural networks for object tracking using separable convolution and filter pruning. IEEE Access. 7, 106466–106474 (2019)
Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. In: CVPR, pp. 1–12 (2014)
Lebedev, V., Lempitsky, V.: Fast convnets using group-wise brain damage. In: CVPR, pp. 2554–2564 (2016)
Wen, W., Wu, C., Wang, Y., Chen, Y., Li, H.: Learning structured sparsity in deep neural networks. In: NIPS, pp. 2074–2082 (2016)
Li, H., Kadav, A., Durdanovic, I., Samet, H.: Pruning filters for efficient convnets. In: ICLR, pp. 1–13 (2016)
He, Y., Zhang, X., Sun, J.: Channel pruning for accelerating very deep neural networks. In: ICCV, pp. 1–10 (2017)
Howard, A.G., Zhu, M., Chen, B.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. In: CVPR, pp. 1–9 (2017)
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C.: MobileNetV2: Inverted residuals and linear bottlenecks. In: CVPR, pp. 1–14 (2018)
Iandola, F.N., Moskewicz, M.W., Ashraf, K., Han, S., Dally, W.J., Keutzer, K.: Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. In: CVPR, pp. 1–5 (2016)
Jin, J., Dundar, A., Culurciello, E.: Flattened convolutional neural networks for feedforward acceleration. In: ICLR, pp. 1–11 (2015)
Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: Imagenet classification using binary convolutional neural networks. In: CVPR, pp. 1–17 (2016)
Wang, M., Liu, B., Foroosh, H.: Factorized convolutional neural networks. In: CVPR, pp. 1–10 (2016)
Wu, J., Leng, C., Wang, Y., Hu, Q., Cheng, J.: Quantized convolutional neural networks for mobile devices. In: CVPR, pp. 4820–4828 (2016)
Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: CVPR, pp. 6848–6856 (2018)
Lin, M., Ji, R., Wang, Y., Zhang, Y., Zhang, B., Tian, Y., Shao, L.: HRank: Filter pruning using high-rank feature map. In: CVPR (2020)
Misha, D., Babak, S., Laurent, D.: Predicting parameters in deep learning. In: NIPS, pp. 2148–2156 (2013)
Anwar, S., Hwang, K., Sung, W.: Structured pruning of deep convolutional neural networks. ACM J. Emerg. Technol. Comput. Syst. 13(3), 1–18 (2017)
Yang, C., Yang, Z., Khattak, A.M., Yang, L., Zhang, W., Gao, W., Wang, M.: Structured pruning of convolutional neural networks via L1 regularization. IEEE Access. 7, 106385–106394 (2019)
Garg, I., Panda, P., Roy, K.: A low effort approach to structured CNN design using PCA. IEEE Access. 8, 1347–1360 (2020)
Luo, J., Zhang, H., Zhou, H., Xie, C., Wu, J., Lin, W.: Thinet: pruning CNN filters for a thinner net. IEEE Trans. Pattern Anal. Mach. Intell. 41(10), 2525–2538 (2018)
Yang, W., Jin, L., Wang, S., Cu, Z., Chen, X., Chen, L.: Thinning of convolutional neural network with mixed pruning. IET Image Proc. 13(5), 779–784 (2019)
Zheng, Z.: Sparse locality preserving embedding. In: CISP, pp. 1–5 (2009)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Li, F.: Imagenet: A large-scale hierarchical image database. In: CVPR, pp. 248–255 (2009)
Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images (2009)
Acknowledgements
This research was supported by the Shenzhen Research Council (KJYY20170724152625446), by the National Natural Science Foundation of China (Grant Nos. 61871154, 61906124, 61906103, 62031013), by the Chinese Postdoctoral Science Foundation (Grant No. 2018M630158) and by the Scientific Research Platform Cultivation Project of SZIIT (PT201704).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Liang, Y., Liu, W., Yi, S. et al. Filter pruning-based two-step feature map reconstruction. SIViP 15, 1555–1563 (2021). https://doi.org/10.1007/s11760-021-01888-4