Generating Low-Rank Textures via Generative Adversarial Network
Obtaining a structured low-rank representation from an original image is a challenging and significant task, owing to the capacity of low-rank structures to express structured information from the real world. Notably, most existing methods for obtaining low-rank textures treat this issue as a “transformation problem”, which leads to poor-quality results on images with complex backgrounds. To escape this limitation, we instead explore the issue as a “generation problem” and propose the Low-Rank texture Generative Adversarial Network (LR-GAN), an unsupervised image-to-image network. Our method gradually generates a high-quality low-rank texture under a low-rank constraint over many iterations of training. Because the low-rank constraint is difficult to optimize directly in the loss function (rank minimization is NP-hard), we introduce a low-rank gradient filter layer to approximate the optimal low-rank solution. Experimental results demonstrate that the proposed method is effective on both synthetic and real-world images.
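To make the optimization difficulty concrete: exact rank minimization is NP-hard, so practical pipelines typically replace the rank with a tractable surrogate such as the nuclear norm (sum of singular values), and project onto low-rank matrices via a truncated SVD. The sketch below illustrates this standard relaxation only; the function names `nuclear_norm` and `low_rank_project` are illustrative and are not the paper's low-rank gradient filter layer, whose details are given in the full text.

```python
import numpy as np

def nuclear_norm(x):
    # Sum of singular values: a convex surrogate for matrix rank,
    # usable as a differentiable low-rank penalty in a loss function.
    return np.linalg.svd(x, compute_uv=False).sum()

def low_rank_project(x, k):
    # Best rank-<=k approximation of x (Eckart-Young) via truncated SVD,
    # the kind of projection a low-rank filtering step approximates.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s[k:] = 0.0
    return u @ np.diag(s) @ vt

# A rank-1 "texture": every row repeats the same pattern.
texture = np.outer(np.ones(4), np.array([1.0, 2.0, 3.0, 4.0]))
noisy = texture + 0.01 * np.eye(4)          # small full-rank perturbation

denoised = low_rank_project(noisy, 1)        # recover a rank-1 structure
print(np.linalg.matrix_rank(denoised))       # 1
```

Truncating the singular values shrinks the nuclear norm, which is why minimizing it in a loss steers the generator toward structured, repetitive textures.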
Keywords: Generative adversarial network · Low-rank texture generative adversarial network · Structured low-rank representation · Low-rank constraint
This work was supported by the National Natural Science Foundation of China (No. 61271374).