CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12349)

Abstract

Typical methods for text-to-image synthesis seek to design an effective generative architecture that models the text-to-image mapping directly, which is fairly arduous because of the cross-modality translation involved. In this paper we circumvent this difficulty by thoroughly parsing the content of both the input text and the synthesized image to model text-image consistency at the semantic level. Specifically, we design a memory structure that parses the textual content by exploring, during text encoding, the semantic correspondence between each word in the vocabulary and its various visual contexts across relevant images. Meanwhile, the synthesized image is parsed to learn its semantics in an object-aware manner. Moreover, we customize a conditional discriminator that models the fine-grained correlations between words and image sub-regions to push for text-image semantic alignment. Extensive experiments on the COCO dataset demonstrate that our model advances the state of the art significantly (from 35.69 to 52.73 in Inception Score).
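The memory-based text encoding summarized above can be pictured as key-value attention over a per-word memory of visual contexts. The sketch below is illustrative only: the function name, shapes, and the plain softmax readout are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def memory_augmented_word_encoding(word_emb, mem_keys, mem_values):
    """Enrich a word embedding by attending over a visual-context memory.

    word_emb   : (d,)   text-side embedding of one word
    mem_keys   : (n, d) one key per stored visual context of that word
    mem_values : (n, d) the corresponding visual-context features
    Returns a (2d,) vector fusing the textual and visual representations.
    """
    scores = mem_keys @ word_emb                  # similarity to each memory slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the n slots
    visual_context = weights @ mem_values         # attention-weighted readout
    return np.concatenate([word_emb, visual_context])
```

A text encoder would apply such a lookup to every word of the caption, so each word representation carries information about how that word tends to look in relevant training images.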

Keywords

Text-to-image synthesis · Content-Parsing Generative Adversarial Networks · Memory structure · Cross-modality

Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 61972012 and 61732016.

Supplementary material

Supplementary material 1: 504439_1_En_29_MOESM1_ESM.pdf (317 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University, Beijing, China
  2. Harbin Institute of Technology, Shenzhen, China
  3. Peng Cheng Laboratory, Shenzhen, China