Abstract
Capsule Networks (CapsNets) are a recent generation of image classifiers with demonstrated advantages over Convolutional Neural Networks (CNNs). Improved robustness to affine transformations and the ability to detect overlapping objects are among the benefits associated with CapsNets. However, CapsNets cannot be classified as a resource-efficient deep learning architecture due to their high number of Primary Capsules (PCs). In addition, CapsNets' training and testing are slow and resource-hungry. In this chapter, we propose two methods for reducing the number of PCs to make CapsNet resource-efficient. In our first approach, we introduce the Light and Enhanced Capsule Network (LE-CapsNet), which modifies the CapsNet architecture by adding a Primary Capsule Generator (PCG) module. We further compress this network by optimizing its feature extraction and introduce LE-CapsNet-T, a tiny variant of the network. Using 3.8M weights, LE-CapsNet obtains 77.21% accuracy on the CIFAR-10 dataset while performing inference 4x faster than CapsNet. In our second approach, we investigate the possibility of pruning PCs in CapsNet. We show that a pruned version of CapsNet performs up to 9.90x faster than the conventional architecture by removing 95% of capsules without loss of accuracy. Our pruned architecture also eliminates more than 95.36% of the floating-point operations in the dynamic routing stage. Moreover, we provide insight into why some datasets benefit significantly from pruning while others do not.
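To make the pruning idea concrete, the following is a minimal PyTorch sketch, not the chapter's implementation: it discards all but the strongest Primary Capsules before dynamic routing, so the routing cost, which scales linearly with the number of input capsules, drops in proportion. The L2-norm selection criterion, the 5% keep ratio, and the `prune_primary_capsules` helper are illustrative assumptions.

```python
import torch

def prune_primary_capsules(u, keep_ratio=0.05):
    """u: (batch, num_caps, caps_dim) primary-capsule outputs.
    Keeps the top-k capsules ranked by the L2 norm of their pose vectors."""
    num_caps = u.size(1)
    k = max(1, int(num_caps * keep_ratio))
    norms = u.norm(dim=-1)                        # (batch, num_caps) activation strengths
    _, idx = norms.topk(k, dim=1)                 # indices of the strongest capsules
    idx = idx.unsqueeze(-1).expand(-1, -1, u.size(-1))
    return torch.gather(u, 1, idx)                # (batch, k, caps_dim)

# Example: 2048 primary capsules (CIFAR-10-sized CapsNet), 8-D poses.
u = torch.randn(4, 2048, 8)
pruned = prune_primary_capsules(u)
print(pruned.shape)  # torch.Size([4, 102, 8]) -> ~95% of capsules removed
```

Because every routing iteration touches each surviving primary capsule, keeping 5% of the capsules removes roughly 95% of the routing work, consistent with the savings reported above.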
Notes
- 1.
The choice of dataset impacts these numbers, as feature extraction produces different amounts of information (a different feature map size) depending on the dataset and the input image size; see the sketch following these notes.
- 2.
- 3.
Note that, in the interest of a fair comparison, we implemented LE-CapsNet in the PyTorch version of CapsNet; CapsNet and CFC-CapsNet are thus implemented using the same framework. In addition, to include MS-CapsNet as a light and accurate alternative to CapsNet in our comparison, we implemented and tested it in the same framework. It is also noteworthy that DA-CapsNet does not provide code for their work, so we could not verify their numbers.
- 4.
For the MLCN architecture, we report the number of parameters for the variant with the highest accuracy.
- 5.
Note that MS-CapsNet uses 13×13 kernels for the first convolutional layer. Evaluating this network with our PyTorch implementation resulted in very low accuracy, so we used the kernel size applied to the other datasets (a 9×9 kernel) for CIFAR-10 as well. This explains why we require a high number of parameters for CIFAR-10, in contrast to what the authors state [2].
- 6.
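As a small illustration of note 1, the snippet below computes how the input resolution changes the Primary Capsule count in the original CapsNet extractor (a 9×9 stride-1 convolution followed by a 9×9 stride-2 convolution producing 32 capsule types, per Sabour et al. [3]); the `num_primary_capsules` helper is ours, not from the chapter.

```python
def num_primary_capsules(input_size, caps_types=32):
    after_conv1 = input_size - 9 + 1           # 9x9 conv, stride 1, no padding
    after_conv2 = (after_conv1 - 9) // 2 + 1   # 9x9 conv, stride 2, no padding
    return after_conv2 * after_conv2 * caps_types

for name, size in [("MNIST / Fashion-MNIST", 28), ("SVHN / CIFAR-10", 32)]:
    print(f"{name} ({size}x{size}): {num_primary_capsules(size)} capsules")
# 28x28 -> 6*6*32 = 1152 capsules; 32x32 -> 8*8*32 = 2048 capsules
```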
References
Zhao, B., Feng, J., Xiao, W., Yan, S.: A survey on deep learning-based fine-grained object classification and semantic segmentation. Int. J. Autom. Comput. 14(2), 119–135 (2017)
Xiang, C., Zhang, L., Tang, Y., Zou, W., Chen, X.: MS-CapsNet: a novel multi-scale capsule network. IEEE Signal Process. Lett. 25(12), 1850–1854 (2018)
Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules. Adv. Neural Inf. Process. Syst. (2017)
Blalock, D., Gonzalez Ortiz, J.J., Frankle, J., Guttag, J.: What is the state of neural network pruning? In: Dhillon, I., Papailiopoulos, D., Sze, V. (eds.) Proceedings of Machine Learning and Systems, vol. 2, pp. 129–146 (2020)
Frankle, J., Carbin, M.: The lottery ticket hypothesis: finding sparse, trainable neural networks. In: International Conference on Learning Representations (2019)
Shiri, P., Sharifi, R., Baniasadi, A.: Quick-CapsNet (QCN): a fast alternative to capsule networks. In: Proceedings of the IEEE/ACS International Conference on Computer Systems and Applications (AICCSA) (2020)
Shiri, P., Baniasadi, A.: Convolutional fully-connected capsule network (CFC-CapsNet). In: ACM International Conference Proceeding Series (2021)
LeCun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. Adv. Neural Inf. Process. Syst. (1990)
Lebedev, V., Lempitsky, V.: Fast ConvNets using group-wise brain damage. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Hassibi, B., Stork, D.G., Wolff, G.J.: Optimal brain surgeon and general network pruning. In: IEEE International Conference on Neural Networks (1993)
Han, S., Pool, J., Tran, J., Dally, W.J.: Learning both weights and connections for efficient neural networks. Adv. Neural Inf. Process. Syst. (2015)
Suzuki, T., Abe, H., Murata, T., Horiuchi, S., Ito, K., Wachi, T., Hirai, S., Yukishima, M., Nishimura, T.: Spectral-pruning: compressing deep neural network via spectral analysis (2018)
Lee, N., Ajanthan, T., Gould, S., Torr, P.H.S.: A signal propagation perspective for pruning neural networks at initialization (2019)
Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., van den Oord, A., Dieleman, S., Kavukcuoglu, K.: Efficient neural audio synthesis. In: Proceedings of the 35th International Conference on Machine Learning (2018)
Gale, T., Elsen, E., Hooker, S.: The state of sparsity in deep neural networks (2019)
Yu, R., Li, A., Chen, C.F., Lai, J.H., Morariu, V.I., Han, X., Gao, M., Lin, C.Y., Davis, L.S.: NISP: pruning networks using neuron importance score propagation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
Molchanov, P., Mallya, A., Tyree, S., Frosio, I., Kautz, J.: Importance estimation for neural network pruning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11256–11264 (2019)
Lee, N., Ajanthan, T., Torr, P.H.S.: SNIP: single-shot network pruning based on connection sensitivity. In: International Conference on Learning Representations (2019)
Luo, J.H., Wu, J., Lin, W.: ThiNet: a filter level pruning method for deep neural network compression. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2017)
Zhang, X., Zou, J., He, K., Sun, J.: Accelerating very deep convolutional networks for classification and detection. IEEE Trans. Pattern Anal. Mach. Intell. 38 (2016)
Rosario, V.M.D., Borin, E., Breternitz, M.: The multi-lane capsule network. IEEE Signal Process. Lett. 26(7), 1006–1010 (2019)
Rajasegaran, J., Jayasundara, V., Jayasekara, S., Jayasekara, H., Seneviratne, S., Rodrigo, R.: DeepCaps: going deeper with capsule networks (2019)
Huang, W., Zhou, F.: DA-CapsNet: dual attention mechanism capsule network. Sci. Rep. (2020)
Rajasegaran, J., Jayasundara, V., Jayasekara, S., Jayasekara, H., Seneviratne, S., Rodrigo, R.: DeepCaps: going deeper with capsule networks. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 10717–10725 (2019)
Molchanov, D., Ashukha, A., Vetrov, D.: Variational dropout sparsifies deep neural networks. In: Proceedings of the 34th International Conference on Machine Learning, ICML’17, vol. 70, pp. 2498–2507. JMLR.org (2017)
Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms (2017)
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: The street view house numbers (SVHN) dataset (2011)
Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 and CIFAR-100 datasets (2009)
LeCun, Y.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/
Sharifi, R., Shiri, P., Baniasadi, A.: Zero-skipping in CapsNet: is it worth it? vol. 69 (2020)
Branchaud-Charron, F., Achkar, A., Jodoin, P.M.: Spectral metric for dataset complexity assessment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Acknowledgements
This research has been funded in part or completely by the Computing Hardware for Emerging Intelligent Sensory Applications (COHESA) project. COHESA is financed under the Natural Sciences and Engineering Research Council of Canada (NSERC) Strategic Networks grant number NETGP485577-15.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Shiri, P., Sharifi, R., Baniasadi, A. (2023). Resource-Aware Capsule Network. In: Wani, M.A., Palade, V. (eds) Deep Learning Applications, Volume 4. Advances in Intelligent Systems and Computing, vol 1434. Springer, Singapore. https://doi.org/10.1007/978-981-19-6153-3_11
DOI: https://doi.org/10.1007/978-981-19-6153-3_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-6152-6
Online ISBN: 978-981-19-6153-3
eBook Packages: Intelligent Technologies and Robotics (R0)