
Use of Neural Signals to Evaluate the Quality of Generative Adversarial Network Performance in Facial Image Generation

  • Zhengwei Wang
  • Graham Healy
  • Alan F. Smeaton
  • Tomás E. Ward

Abstract

There is growing interest in using generative adversarial networks (GANs) to produce image content that is indistinguishable from real images as judged by a typical person. A number of GAN variants have been proposed for this purpose; however, evaluating GAN performance is inherently difficult because current methods for measuring the quality of their output are not always consistent with human perception. We propose a novel approach that combines a brain-computer interface (BCI) with GANs to generate a measure we call Neuroscore, which closely mirrors the behavioral ground truth measured from participants tasked with discerning real from synthetic images. We call this technique a neuro-AI interface, as it provides an interface between a human’s neural systems and an AI process. In this paper, we first compare the three most widely used metrics in the literature for evaluating the visual quality of GAN output against human judgments. Second, we propose and demonstrate a novel approach using neural signals and rapid serial visual presentation (RSVP) that directly measures a human perceptual response to facial image quality, independent of a behavioral response measurement. The correlation between our proposed Neuroscore and human perceptual judgments is r(48) = − 0.767, p = 2.089e − 10 (Pearson), with a bootstrapped result for the correlation of p ≤ 0.0001. Results show that Neuroscore is more consistent with human judgment than the conventional metrics we evaluated. We conclude that neural signals have potential applications for the high-quality, rapid evaluation of GANs in the context of visual image synthesis.
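The reported statistic (a Pearson correlation with a bootstrapped p-value) can be illustrated with a pairs bootstrap: resample (Neuroscore, human-judgment) pairs with replacement and count how often the resampled correlation crosses zero. The sketch below is a minimal plain-Python illustration of that idea; the helper names (`pearson_r`, `bootstrap_p`) are our own, and the sign-crossing criterion is one common convention, not necessarily the paper's exact procedure.

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        return 0.0  # degenerate sample (zero variance): treat as uncorrelated
    return sxy / (sx * sy)

def bootstrap_p(x, y, n_boot=10_000, seed=0):
    """Pairs bootstrap: fraction of resampled correlations whose sign
    differs from the observed correlation (or is zero)."""
    rng = random.Random(seed)
    r_obs = pearson_r(x, y)
    idx = range(len(x))
    crossings = 0
    for _ in range(n_boot):
        # Resample index pairs with replacement, keeping (x, y) aligned.
        sample = [rng.choice(idx) for _ in idx]
        r_b = pearson_r([x[i] for i in sample], [y[i] for i in sample])
        if r_b * r_obs <= 0:
            crossings += 1
    return crossings / n_boot
```

With 50 strongly correlated observations, as in the reported result, essentially no resampled correlation crosses zero, which is how a bootstrap can yield a bound such as p ≤ 0.0001 rather than an exact analytic p-value.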

Keywords

Generative adversarial networks · Rapid serial visual presentation · Human judgments · Brain-computer interface · Neuro-AI interface

Notes

Funding Information

This work is funded as part of the Insight Centre for Data Analytics which is supported by Science Foundation Ireland under Grant Number SFI/12/RC/2289.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Insight Centre for Data Analytics, Dublin City University, Dublin 9, Ireland
