
GAN Slimming: All-in-One GAN Compression by a Unified Optimization Framework

Conference paper

Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science, volume 12349

Abstract

Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications and have recently begun to be deployed on resource-constrained mobile devices. Like other deep models, state-of-the-art GANs suffer from high parameter complexity, which has motivated the exploration of compressing GANs (usually the generators). Compared to the vast literature and prevailing success in compressing deep classifiers, the study of GAN compression remains in its infancy, so far leveraging individual compression techniques rather than more sophisticated combinations. We observe that, due to the notorious instability of GAN training, heuristically stacking different compression techniques leads to unsatisfactory results. To this end, we propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming (GS). GS seamlessly integrates three mainstream compression techniques: model distillation, channel pruning and quantization, together with the GAN minimax objective, into one unified optimization form that can be efficiently optimized end to end. Without bells and whistles, GS largely outperforms existing options in compressing image-to-image translation GANs. Specifically, we apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to \(\mathbf {47\times }\), with minimal visual quality degradation. Code and pre-trained models can be found at https://github.com/TAMU-VITA/GAN-Slimming.
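To make the "one unified optimization form" concrete, below is a minimal PyTorch-style sketch of how an adversarial term, a distillation term, and an \(\ell _1\) channel-sparsity term can be combined into a single generator loss, with weights quantized through a straight-through estimator. The helper names and the specific loss choices here are illustrative assumptions on our part, not the released GAN Slimming code.

```python
import torch
import torch.nn.functional as F

def quantize_ste(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniformly quantize a weight tensor; the straight-through estimator
    keeps gradients flowing to the full-precision copy."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # forward: quantized, backward: identity

def slim_generator_loss(student, teacher, discriminator, x, rho: float = 0.01):
    """Illustrative combined objective: adversarial + distillation +
    L1 sparsity on BatchNorm scales (a proxy for channel pruning)."""
    y_s = student(x)                      # student output (quantized weights assumed)
    with torch.no_grad():
        y_t = teacher(x)                  # frozen, uncompressed teacher
    adv = -discriminator(y_s).mean()      # one common generator adversarial loss
    dist = F.l1_loss(y_s, y_t)            # distillation: match the teacher's outputs
    sparsity = sum(m.weight.abs().sum()   # gamma scales of normalization layers
                   for m in student.modules()
                   if isinstance(m, torch.nn.BatchNorm2d))
    return adv + dist + rho * sparsity
```

Optimizing a joint form of this kind is what allows the three techniques to be traded off against each other during training, rather than being stacked sequentially.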


Notes

  1. A concurrent work [65] jointly optimized pruning, decomposition, and quantization into one unified framework for reducing memory storage/access.

  2. We only quantize W, while always leaving \(\gamma \) unquantized (a minimal illustrative sketch follows these notes).

  3. Following [3], we use student networks with 1/2 the channels of the original generator.

  4. Available at https://github.com/maciej3031/comixify.

  5. Following [10], we use color matching as a post-processing step on all compared methods, for better visual display quality.
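As a small illustration of the selective quantization described in note 2, the sketch below quantizes only convolution weights W and deliberately skips normalization scales \(\gamma \) and biases. The function name and default bit-width are our own assumptions, not taken from the paper's released code.

```python
import torch

@torch.no_grad()
def quantize_conv_weights(gen: torch.nn.Module, num_bits: int = 8) -> None:
    """Quantize convolution weights W in place; BatchNorm scales (gamma)
    and all biases stay in full precision."""
    qmax = 2 ** (num_bits - 1) - 1
    for m in gen.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
            w = m.weight
            scale = w.abs().max() / qmax + 1e-12
            m.weight.copy_(torch.round(w / scale).clamp(-qmax - 1, qmax) * scale)
```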

References

  1. Bulò, S.R., Porzi, L., Kontschieder, P.: Dropout distillation. In: International Conference on Machine Learning, pp. 99–107 (2016)


  2. Chen, H., et al.: Frequency domain compact 3D convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1641–1650 (2020)


  3. Chen, H., et al.: Distilling portable generative adversarial networks for image translation. In: AAAI Conference on Artificial Intelligence (2020)


  4. Chen, H., Wang, Y., Xu, C., Xu, C., Tao, D.: Learning student networks via feature embedding. IEEE Trans. Neural Netw. Learn. Syst. (2020)


  5. Chen, H., et al.: Data-free learning of student networks. In: IEEE International Conference on Computer Vision, pp. 3514–3522 (2019)


  6. Chen, H., et al.: AdderNet: do we really need multiplications in deep learning? In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1468–1477 (2020)


  7. Chen, Y., Lai, Y.K., Liu, Y.J.: CartoonGAN: generative adversarial networks for photo cartoonization. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 9465–9474 (2018)


  8. Courbariaux, M., Bengio, Y., David, J.P.: BinaryConnect: training deep neural networks with binary weights during propagations. In: Advances in Neural Information Processing Systems, pp. 3123–3131 (2015)


  9. Fu, Y., Chen, W., Wang, H., Li, H., Lin, Y., Wang, Z.: AutoGAN-Distiller: searching to compress generative adversarial networks. In: International Conference on Machine Learning (2020)


  10. Gatys, L.A., Bethge, M., Hertzmann, A., Shechtman, E.: Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897 (2016)

  11. Gong, X., Chang, S., Jiang, Y., Wang, Z.: AutoGAN: neural architecture search for generative adversarial networks. In: IEEE International Conference on Computer Vision, pp. 3224–3234 (2019)


  12. Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)


  13. Gui, J., Sun, Z., Wen, Y., Tao, D., Ye, J.: A review on generative adversarial networks: algorithms, theory, and applications. arXiv preprint arXiv:2001.06937 (2020)

  14. Gui, S., Wang, H., Yang, H., Yu, C., Wang, Z., Liu, J.: Model compression with adversarial robustness: a unified optimization framework. In: Advances in Neural Information Processing Systems, pp. 1283–1294 (2019)


  15. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems, pp. 5767–5777 (2017)


  16. Guo, T., et al.: On positive-unlabeled classification in GAN. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 8385–8393 (2020)


  17. Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C.: GhostNet: more features from cheap operations. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1580–1589 (2020)


  18. Han, S., Mao, H., Dally, W.J.: Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149 (2015)

  19. He, Y., Lin, J., Liu, Z., Wang, H., Li, L.-J., Han, S.: AMC: AutoML for model compression and acceleration on mobile devices. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 815–832. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_48


  20. He, Y., Zhang, X., Sun, J.: Channel pruning for accelerating very deep neural networks. In: IEEE International Conference on Computer Vision, pp. 1389–1397 (2017)


  21. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: Advances in Neural Information Processing Systems, pp. 6626–6637 (2017)


  22. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)

  23. Hu, H., Peng, R., Tai, Y.W., Tang, C.K.: Network trimming: a data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250 (2016)

  24. Huang, Z., Wang, N.: Data-driven sparse structure selection for deep neural networks. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11220, pp. 317–334. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01270-0_19


  25. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18(1), 6869–6898 (2017)


  26. Jiang, Y., et al.: EnlightenGAN: deep light enhancement without paired supervision. arXiv preprint arXiv:1906.06972 (2019)

  27. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 694–711. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_43


  28. Jung, S., et al.: Learning to quantize deep networks by optimizing quantization intervals with task loss. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4350–4359 (2019)


  29. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. In: International Conference on Learning Representations (2018)


  30. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410 (2019)


  31. Kim, S., Xing, E.P.: Tree-guided group Lasso for multi-task regression with structured sparsity. Ann. Appl. Stat. 6(3), 1095–1117 (2012)


  32. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  33. Krizhevsky, A.: Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto (2009)


  34. Kupyn, O., Martyniuk, T., Wu, J., Wang, Z.: DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better. In: IEEE International Conference on Computer Vision, pp. 8878–8887 (2019)


  35. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017)


  36. Li, M., Lin, J., Ding, Y., Liu, Z., Zhu, J.Y., Han, S.: GAN compression: efficient architectures for interactive conditional GANs. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 5284–5294 (2020)


  37. Lin, J., Rao, Y., Lu, J., Zhou, J.: Runtime neural pruning. In: Advances in Neural Information Processing Systems, pp. 2181–2191 (2017)


  38. Liu, S., Du, J., Nan, K., Wang, A., Lin, Y., et al.: AdaDeep: a usage-driven, automated deep model compression framework for enabling ubiquitous intelligent mobiles. arXiv preprint arXiv:2006.04432 (2020)

  39. Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.: Learning efficient convolutional networks through network slimming. In: IEEE International Conference on Computer Vision, pp. 2736–2744 (2017)


  40. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: IEEE International Conference on Computer Vision, pp. 3730–3738 (2015)


  41. Lopez-Paz, D., Bottou, L., Schölkopf, B., Vapnik, V.: Unifying distillation and privileged information. arXiv preprint arXiv:1511.03643 (2015)

  42. Luo, J.H., Wu, J., Lin, W.: ThiNet: a filter level pruning method for deep neural network compression. In: IEEE International Conference on Computer Vision, pp. 5058–5066 (2017)


  43. Mishra, A., Marr, D.: Apprentice: using knowledge distillation techniques to improve low-precision network accuracy. arXiv preprint arXiv:1711.05852 (2017)

  44. Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. In: International Conference on Learning Representations (2018)


  45. Molchanov, P., Mallya, A., Tyree, S., Frosio, I., Kautz, J.: Importance estimation for neural network pruning. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 11264–11272 (2019)


  46. Parikh, N., Boyd, S., et al.: Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)


  47. Polino, A., Pascanu, R., Alistarh, D.: Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668 (2018)

  48. Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: ImageNet classification using binary convolutional neural networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 525–542. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_32


  49. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training GANs. In: Advances in Neural Information Processing Systems, pp. 2234–2242 (2016)


  50. Sanakoyeu, A., Kotovenko, D., Lang, S., Ommer, B.: A style-aware content loss for real-time HD style transfer. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11212, pp. 715–731. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01237-3_43


  51. Shen, M., Han, K., Xu, C., Wang, Y.: Searching for accurate binary neural architectures. In: IEEE International Conference on Computer Vision Workshops (2019)


  52. Shu, H., et al.: Co-evolutionary compression for unpaired image translation. In: IEEE International Conference on Computer Vision, pp. 3235–3244 (2019)


  53. Singh, P., Verma, V.K., Rai, P., Namboodiri, V.: Leveraging filter correlations for deep model compression. In: IEEE Winter Conference on Applications of Computer Vision, pp. 835–844 (2020)


  54. Theis, L., Korshunova, I., Tejani, A., Huszár, F.: Faster gaze prediction with dense networks and Fisher pruning. arXiv preprint arXiv:1801.05787 (2018)

  55. Tung, F., Mori, G.: CLIP-Q: deep network compression learning by in-parallel pruning-quantization. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 7873–7882 (2018)


  56. Wang, K., Liu, Z., Lin, Y., Lin, J., Han, S.: HAQ: hardware-aware automated quantization with mixed precision. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 8612–8620 (2019)


  57. Wang, M., Zhang, Q., Yang, J., Cui, X., Lin, W.: Graph-adaptive pruning for efficient inference of convolutional neural networks. arXiv preprint arXiv:1811.08589 (2018)

  58. Wang, Y., Xu, C., Chunjing, X., Xu, C., Tao, D.: Learning versatile filters for efficient convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1608–1618 (2018)


  59. Wang, Y., Xu, C., Xu, C., Tao, D.: Adversarial learning of portable student networks. In: AAAI Conference on Artificial Intelligence, pp. 4260–4267 (2018)


  60. Wang, Y., Xu, C., Xu, C., Tao, D.: Packing convolutional neural networks in the frequency domain. IEEE Trans. Pattern Anal. Mach. Intell. 41(10), 2495–2510 (2018)


  61. Wen, W., Wu, C., Wang, Y., Chen, Y., Li, H.: Learning structured sparsity in deep neural networks. In: Advances in Neural Information Processing Systems, pp. 2074–2082 (2016)


  62. Wu, J., Wang, Y., Wu, Z., Wang, Z., Veeraraghavan, A., Lin, Y.: Deep-\(k\)-means: re-training and parameter sharing with harder cluster assignments for compressing deep convolutions. In: International Conference on Machine Learning, pp. 5363–5372 (2018)


  63. Yang, H., Zhu, Y., Liu, J.: ECC: platform-independent energy-constrained deep neural network compression via a bilinear regression model. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 11206–11215 (2019)


  64. Yang, S., Wang, Z., Wang, Z., Xu, N., Liu, J., Guo, Z.: Controllable artistic text style transfer via shape-matching GAN. In: IEEE International Conference on Computer Vision, pp. 4442–4451 (2019)


  65. Zhao, Y., et al.: SmartExchange: trading higher-cost memory storage/access for lower-cost computation. arXiv preprint arXiv:2005.03403 (2020)

  66. Zhu, C., Han, S., Mao, H., Dally, W.J.: Trained ternary quantization. arXiv preprint arXiv:1612.01064 (2016)

  67. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)



Author information

Corresponding author

Correspondence to Zhangyang Wang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 7073 KB)

A Image Generation with SNGAN

We have demonstrated the effectiveness of GS in compressing image-to-image GANs (e.g., CycleGAN [67], StyleGAN [50]) in the main text. Here we show that GS is also generally applicable to noise-to-image GANs (e.g., SNGAN [44]). SNGAN with a ResNet backbone is one of the most popular noise-to-image GANs, with state-of-the-art performance on several datasets such as CIFAR-10 [33]. Its generator has 7 convolutional layers, costs 1.57 GFLOPs, and outputs \(32 \times 32\) images. We evaluate SNGAN generator compression on the CIFAR-10 dataset. Inception Score (IS) [49] is used to measure image generation quality, while FLOPs (as a proxy for latency) and model size are used to evaluate network efficiency. Quantitative and visual results are shown in Table 3 and Fig. 6, respectively. GS compresses SNGAN by up to \(8{\times }\) (in terms of model size) with minimal drop in both visual quality and the quantitative IS of the generated images.

Table 3. SNGAN compression results.
Fig. 6.

CIFAR-10 image generation by SNGAN (original and compressed). Leftmost column: images generated by the original SNGAN. Remaining columns: images generated by GS-8 compressed SNGAN at different compression ratios. Images are randomly sampled rather than cherry-picked.
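For readers who want to sanity-check a model-size ratio such as the \(8{\times }\) reported above, here is a rough back-of-the-envelope estimate assuming that only multi-dimensional (convolution) weights are stored at a reduced bit-width while 1-D parameters stay at 32 bits; the helper name is hypothetical and not part of the official evaluation code.

```python
import torch

def approx_model_size_bytes(model: torch.nn.Module, weight_bits: int = 32) -> int:
    """Approximate storage cost: conv/linear weights at `weight_bits`,
    1-D parameters (BN scales, biases) kept at 32 bits."""
    total_bits = 0
    for p in model.parameters():
        bits = weight_bits if p.dim() > 1 else 32
        total_bits += p.numel() * bits
    return total_bits // 8

# e.g. size_ratio = approx_model_size_bytes(original_gen) / \
#                   approx_model_size_bytes(pruned_gen, weight_bits=8)
```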


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, H., Gui, S., Yang, H., Liu, J., Wang, Z. (2020). GAN Slimming: All-in-One GAN Compression by a Unified Optimization Framework. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, JM. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science(), vol 12349. Springer, Cham. https://doi.org/10.1007/978-3-030-58548-8_4


  • DOI: https://doi.org/10.1007/978-3-030-58548-8_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58547-1

  • Online ISBN: 978-3-030-58548-8

  • eBook Packages: Computer Science (R0)
