Generating Low-Rank Textures via Generative Adversarial Network

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)

Abstract

Extracting a structured low-rank representation from an original image is a challenging and significant task, owing to the capacity of low-rank structures to express structured information from the real world. Notably, most existing methods for obtaining low-rank textures treat this issue as a “transformational problem”, which leads to poor results on images with complex backgrounds. To avoid this interference, we instead explore the issue as a “generative problem” and propose the Low-rank texture Generative Adversarial Network (LR-GAN), built on an unsupervised image-to-image network. Our method gradually generates high-quality low-rank textures under a low-rank constraint over many training iterations. Because the low-rank constraint is difficult to optimize directly in the loss function (rank minimization is NP-hard), we introduce a low-rank gradient filter layer to approximate the optimal low-rank solution. Experimental results demonstrate that the proposed method is effective on both synthetic and real-world images.
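
The abstract gives no implementation details, so the following is only a rough sketch of how a low-rank constraint can be attached to a GAN generator objective. It uses a nuclear-norm surrogate of rank (a common convex relaxation that keeps the loss differentiable) rather than the paper's low-rank gradient filter layer, which is not reproduced here. The names `fake_scores`, `fake_textures`, the LSGAN-style adversarial term, and the weight `lr_weight` are illustrative assumptions, not the authors' method.

import torch

def nuclear_norm(x: torch.Tensor) -> torch.Tensor:
    """Mean sum of singular values over each (H, W) texture map.

    The nuclear norm is a convex surrogate for rank, so it can be
    minimized by gradient descent instead of the NP-hard rank itself.
    """
    b, c, h, w = x.shape
    mats = x.reshape(b * c, h, w)          # treat each channel as a matrix
    sv = torch.linalg.svdvals(mats)        # batched singular values
    return sv.sum(dim=-1).mean()

def generator_loss(fake_scores: torch.Tensor,
                   fake_textures: torch.Tensor,
                   lr_weight: float = 0.01) -> torch.Tensor:
    """Adversarial term (LSGAN-style, one common choice) plus the
    low-rank surrogate penalty on the generated textures."""
    adversarial = 0.5 * ((fake_scores - 1.0) ** 2).mean()
    return adversarial + lr_weight * nuclear_norm(fake_textures)

In the paper's formulation, this penalty term would be replaced by the proposed low-rank gradient filter layer, which the abstract describes as approaching the optimal low-rank solution during training.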

Keywords

Generative adversarial network · Low-rank texture generative adversarial network · Structured low-rank representation · Low-rank constraint

Notes

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61271374).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
