
DLP-GAN: learning to draw modern Chinese landscape photos with generative adversarial network

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Chinese landscape painting has a unique artistic style: its drawing technique is highly abstract both in its use of color and in its representation of objects. Previous methods have focused on transferring modern photos to ancient ink paintings, but little attention has been paid to the reverse direction, translating landscape paintings into modern photos. To address this, we (1) propose DLP-GAN (Draw Modern Chinese Landscape Photos with Generative Adversarial Network), an unsupervised cross-domain image translation framework with a novel asymmetric cycle mapping, and (2) introduce a generator based on a dense-fusion module to match the two translation directions. Moreover, we propose a dual-consistency loss to balance the realism and abstraction of the generated images. In this way, our model can draw landscape photos and sketches in the modern sense. Finally, on our collected modern landscape and sketch datasets, we compare the images generated by our model against other benchmarks. Extensive experiments, including user studies, show that our model outperforms state-of-the-art methods.
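The asymmetric cycle mapping above builds on the unpaired cycle-consistency idea: a generator G maps paintings to photos, a second generator F maps photos back, and each round trip should reconstruct its input. As a minimal illustrative sketch only (DLP-GAN's actual dual-consistency loss, network architectures, and weighting are not specified here; the toy generators `G` and `F` below are stand-ins, not the paper's models):

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the usual reconstruction distance."""
    return float(np.mean(np.abs(a - b)))

# Toy stand-ins for the two generators: G maps paintings -> photos,
# F maps photos -> paintings. Real generators are deep networks; here
# we use exact inverses purely to illustrate the cycle objective.
G = lambda x: x * 2.0
F = lambda y: y / 2.0

def cycle_consistency_loss(x_painting, y_photo, lam=10.0):
    """Cycle loss for unpaired translation: F(G(x)) should recover x
    and G(F(y)) should recover y; lam weighs it against the GAN loss."""
    forward = l1(F(G(x_painting)), x_painting)   # painting -> photo -> painting
    backward = l1(G(F(y_photo)), y_photo)        # photo -> painting -> photo
    return lam * (forward + backward)

x = np.ones((4, 4))          # toy "painting"
y = np.full((4, 4), 3.0)     # toy "photo"
print(cycle_consistency_loss(x, y))  # exact inverses -> 0.0
```

An asymmetric variant, as in DLP-GAN, would weight or structure the two directions differently (e.g., a relaxed backward constraint), since photo-to-painting abstraction discards detail that the reverse mapping cannot fully recover.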


Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the Key Research and Development Program of Gansu Province (No. 22YF7GA159), Soft Science Special Project of Gansu Basic Research Plan (No. 22JR4ZA084), Industry Support Program of Gansu Provincial Department of Education (No. 2023CYZC-25), the National Key Research and Development Program of China (No. 2021ZD0111405), the Key Research and Development Program of Gansu Province (No. 21YF5GA103, No. 21YF5FA111), Lanzhou Science and Technology Planning Project (No. 2021-1-183), and Lanzhou Talent Innovation and Entrepreneurship Project (No. 2021-RC-91). The authors gratefully acknowledge the anonymous reviewers for their helpful comments and suggestions.

Author information


Corresponding author

Correspondence to Binxuan Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest related to this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gui, X., Zhang, B., Li, L. et al. DLP-GAN: learning to draw modern Chinese landscape photos with generative adversarial network. Neural Comput & Applic 36, 5267–5284 (2024). https://doi.org/10.1007/s00521-023-09345-8

