TREND: Truncated Generalized Normal Density Estimation of Inception Embeddings for GAN Evaluation

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13683)

Abstract

Evaluating image generation models such as generative adversarial networks (GANs) is a challenging problem. A common approach is to compare the distribution of a set of ground-truth images with that of a set of generated test images. The Fréchet Inception distance (FID), one of the most widely used metrics for GAN evaluation, assumes that the features extracted from a set of images by a trained Inception model follow a normal distribution. In this paper, we argue that this is an over-simplified assumption that may lead to unreliable evaluation results, and that more accurate density estimation can be achieved using a truncated generalized normal distribution. Based on this, we propose a novel metric for accurate evaluation of GANs, named TREND (TRuncated gEneralized Normal Density estimation of inception embeddings). We demonstrate that our approach significantly reduces density estimation errors and thereby eliminates the risk of faulty evaluation results. Furthermore, the proposed metric is significantly more robust to variations in the number of image samples.
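The Gaussian assumption that the abstract criticizes enters FID through the closed-form Fréchet distance between two normal distributions. The following is a minimal sketch (not the authors' implementation) of that formula; the 4-dimensional synthetic example stands in for the 2048-dimensional Inception pool3 features used in practice.

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    Tr((cov1 cov2)^{1/2}) equals the sum of the square roots of the
    eigenvalues of cov1 @ cov2, which are real and non-negative when
    both covariances are positive semi-definite."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    trace_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * trace_sqrt

# In FID, mu/cov are the sample mean and covariance of Inception features
# of the real and generated image sets; here we use toy parameters.
mu, cov = np.zeros(4), np.eye(4)
print(frechet_distance(mu, cov, mu, cov))        # identical Gaussians -> 0.0
print(frechet_distance(mu, cov, mu + 1.0, cov))  # unit mean shift per dim -> 4.0
```

Because the formula depends only on the first two moments, any non-Gaussian structure of the embeddings (skew, heavy or light tails) is invisible to FID, which is the gap TREND's truncated generalized normal density estimation targets.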



Notes

  1.

    We omit subscript i for \(\mu \), \(\sigma \), \(\beta \), and G for simplicity.

  2.

    The Mann-Whitney U test fails to reject the null hypothesis that the two distributions are statistically identical for 1992 (97.3%) out of 2048 dimensions.
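A per-dimension comparison like the one in this note could be scripted as follows. This is a hedged sketch, not the authors' code: the dimensionality and sample counts are toy values, and standard-normal samples stand in for the real embeddings and for samples drawn from the fitted density.

```python
# Per-dimension two-sample Mann-Whitney U tests: count how many
# dimensions fail to reject the null that the two samples come
# from the same distribution at the 5% level.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_dims, n_samples = 8, 500  # the paper uses 2048-d Inception features

real = rng.normal(0.0, 1.0, size=(n_samples, n_dims))   # stand-in: embeddings
model = rng.normal(0.0, 1.0, size=(n_samples, n_dims))  # stand-in: fitted-density samples

not_rejected = sum(
    mannwhitneyu(real[:, d], model[:, d]).pvalue > 0.05
    for d in range(n_dims)
)
print(f"{not_rejected}/{n_dims} dimensions consistent at the 5% level")
```

Being rank-based, the Mann-Whitney U test makes no normality assumption about the embeddings, which makes it a natural choice for validating a non-Gaussian density fit dimension by dimension.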


Acknowledgements

This work was supported in part by the Artificial Intelligence Graduate School Program, Yonsei University under Grant 2020-0-01361, and in part by the Ministry of Trade, Industry and Energy (MOTIE) under Grant P0014268.

Author information

Corresponding author

Correspondence to Jong-Seok Lee.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1396 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lee, J., Lee, JS. (2022). TREND: Truncated Generalized Normal Density Estimation of Inception Embeddings for GAN Evaluation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13683. Springer, Cham. https://doi.org/10.1007/978-3-031-20050-2_6


  • DOI: https://doi.org/10.1007/978-3-031-20050-2_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20049-6

  • Online ISBN: 978-3-031-20050-2

  • eBook Packages: Computer Science, Computer Science (R0)
