
Quality Assessment Method for GAN Based on Modified Metrics Inception Score and Fréchet Inception Distance

  • Conference paper
Software Engineering Perspectives in Intelligent Systems (CoMeSySo 2020)

Abstract

The article examines the problem of quality assessment for generative adversarial networks (GANs). There is no unified, universal metric for comparing and evaluating GANs, and the well-known quality assessment approaches focus on image-generating networks. This paper considers the problem of determining the quality of an arbitrary GAN operating on various datasets. To solve it, a quality assessment method for arbitrary GANs is proposed that modifies the calculation formulas of the Inception Score and the Fréchet Inception Distance so that these metrics can be used to assess and compare arbitrary GANs. The developed method was tested in experiments on generating objects from a labeled dataset (MNIST) and from unlabeled datasets (Human Activity Recognition Using Smartphones and Epileptic Seizure Recognition). The obtained results confirm that the modified Inception Score and Fréchet Inception Distance metrics can be applied to assess the quality of arbitrary GANs.
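
The paper's modified formulas are not reproduced in this abstract; for context, the standard Inception Score and Fréchet Inception Distance that the method builds on can be sketched as follows. This is a minimal illustration, assuming NumPy/SciPy, an (N, K) matrix probs of classifier class probabilities for generated samples, and (N, D) feature matrices feats_real and feats_gen; these names and the random example data are placeholders, not the authors' implementation.

# Minimal sketch of the standard (unmodified) metrics:
#   IS  = exp( E_x [ KL( p(y|x) || p(y) ) ] )
#   FID = ||mu_r - mu_g||^2 + Tr( S_r + S_g - 2 (S_r S_g)^(1/2) )
import numpy as np
from scipy.linalg import sqrtm

def inception_score(probs, eps=1e-12):
    """probs: (N, K) class probabilities p(y|x) for N generated samples."""
    p_y = probs.mean(axis=0, keepdims=True)                  # marginal class distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))   # per-sample KL divergence terms
    return float(np.exp(kl.sum(axis=1).mean()))

def frechet_distance(feats_real, feats_gen):
    """feats_*: (N, D) feature embeddings of real and generated samples."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):                             # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# Usage with random placeholder data (stand-ins for classifier outputs / embeddings):
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=500)     # 500 generated samples, 10 classes
real, fake = rng.normal(size=(500, 64)), rng.normal(size=(500, 64))
print(inception_score(probs), frechet_distance(real, fake))

In the usual image setting both the class probabilities and the feature embeddings come from a pretrained Inception network; the abstract indicates that the proposed modifications adjust these formulas so the metrics can also be applied to GANs trained on non-image data.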

Author information

Corresponding author

Correspondence to Artem Obukhov.


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Obukhov, A., Krasnyanskiy, M. (2020). Quality Assessment Method for GAN Based on Modified Metrics Inception Score and Fréchet Inception Distance. In: Silhavy, R., Silhavy, P., Prokopova, Z. (eds) Software Engineering Perspectives in Intelligent Systems. CoMeSySo 2020. Advances in Intelligent Systems and Computing, vol 1294. Springer, Cham. https://doi.org/10.1007/978-3-030-63322-6_8
