
A New Family of Generative Adversarial Nets Using Heterogeneous Noise to Model Complex Distributions

  • Conference paper
AI 2018: Advances in Artificial Intelligence (AI 2018)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 11320))


Abstract

Generative adversarial nets (GANs) are an effective framework for constructing data models and enjoy desirable theoretical justification. On the other hand, realizing GANs for practical, complex data distributions often requires careful configuration of the generator, discriminator, objective function, and training method, and can involve much non-trivial effort.

We propose a novel family of generative adversarial nets, named BGAN, in which the generating process employs both continuous noise and random binary codes. The binary codes play the role of categorical latent variables, which helps improve model capability and training stability when dealing with complex data distributions. BGAN has been evaluated and compared with existing GANs, trained with state-of-the-art methods, on both synthetic and practical data. The empirical evaluation shows the effectiveness of BGAN.
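The heterogeneous generator input described above can be illustrated with a minimal sketch: a continuous Gaussian noise vector is concatenated with a random binary code before being fed to the generator. The function name, dimensions, and the exact sampling scheme here are assumptions for illustration only; the paper's actual construction may differ.

```python
import numpy as np

def sample_heterogeneous_noise(batch_size, noise_dim, code_dim, rng=None):
    """Sketch of BGAN-style heterogeneous noise: continuous Gaussian
    noise concatenated with a random binary code that acts as a
    categorical latent variable. Illustrative only."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal((batch_size, noise_dim))     # continuous noise
    b = rng.integers(0, 2, size=(batch_size, code_dim))  # random binary code
    # The concatenated vector would serve as the generator's input.
    return np.concatenate([z, b.astype(z.dtype)], axis=1)

x = sample_heterogeneous_noise(4, 16, 3)
print(x.shape)  # (4, 19)
```

Each distinct binary code can be read as selecting a mode of the target distribution, while the continuous part models variation within that mode.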

Supported by the Science and Technology Program of Guangzhou (No. 201704030133), the project “A Knowledge-Connection and Cognitive-Style based Mining System for Massive Open Online Courses and Its Application” (UTS Project Code: PRO16-1300), and the Education Department Foundation of Guangdong Province (2017KTSCX112).



Author information


Corresponding author

Correspondence to Zhenyuan Ma.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Lin, A., Li, J., Zhang, L., Shi, L., Ma, Z. (2018). A New Family of Generative Adversarial Nets Using Heterogeneous Noise to Model Complex Distributions. In: Mitrovic, T., Xue, B., Li, X. (eds) AI 2018: Advances in Artificial Intelligence. AI 2018. Lecture Notes in Computer Science, vol 11320. Springer, Cham. https://doi.org/10.1007/978-3-030-03991-2_63


  • DOI: https://doi.org/10.1007/978-3-030-03991-2_63

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03990-5

  • Online ISBN: 978-3-030-03991-2

  • eBook Packages: Computer Science (R0)
