
Generative Models and Unsupervised Learning

Chapter in: Geometry of Deep Learning

Part of the book series: Mathematics in Industry (MATHINDUSTRY, volume 37)


Abstract

The last part of our voyage toward understanding the geometry of deep learning concerns perhaps the most exciting aspect of deep learning: generative models.




Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Ye, J.C. (2022). Generative Models and Unsupervised Learning. In: Geometry of Deep Learning. Mathematics in Industry, vol 37. Springer, Singapore. https://doi.org/10.1007/978-981-16-6046-7_13
