
Edge-based procedural textures

  • Original article
The Visual Computer


We introduce the edge-based procedural texture (EBPT), a procedural model for semi-stochastic texture generation. EBPT quickly generates large textures from a small input image. It focuses on edges as the visually salient features extracted from the input and organizes them into groups with clearly established spatial properties. Using these edge groups, EBPT allows users to design new textures either interactively or automatically. The output texture can be significantly larger than the input, and EBPT does not need multiple input textures to mimic the exemplar. EBPT-based texture synthesis consists of two major steps: input analysis and texture synthesis. The input analysis stage extracts edges, builds the edge groups, and stores their procedural properties. The texture synthesis stage distributes the edge groups with affine transformations; this step can be performed interactively or automatically using the procedural model. It then generates the output using edge group-based seamless image cloning. We demonstrate our method on various semi-stochastic inputs. With just a few input parameters defining the final structure, our method can analyze a \(512\times 512\) input in 0.7 s and synthesize a \(2048\times 2048\) output texture in 0.5 s.
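The two-stage pipeline described above can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: the crude gradient-magnitude detector stands in for the paper's edge extraction, grouping is collapsed to a single group, and the seamless image cloning step is omitted; only the analyze-then-redistribute structure is shown. All function names here are hypothetical.

```python
# Hypothetical simplification of the EBPT idea: stage 1 extracts salient
# edge pixels from a small input; stage 2 scatters affine-transformed
# copies of that edge group over a larger output canvas.
import numpy as np

def extract_edges(img, thresh=0.5):
    """Stage 1 (sketch): gradient-magnitude edge extraction, a crude
    stand-in for a proper edge detector. Returns N x 2 pixel coords."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh * mag.max())
    return np.stack([xs, ys], axis=1)

def synthesize(edge_pts, out_size, n_copies, rng):
    """Stage 2 (sketch): distribute rotated + translated copies of the
    edge group on a larger canvas; cloning/blending is omitted."""
    out = np.zeros((out_size, out_size), dtype=np.uint8)
    centered = edge_pts - edge_pts.mean(axis=0)      # group-local coords
    for _ in range(n_copies):
        a = rng.uniform(0.0, 2.0 * np.pi)            # random rotation
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        t = rng.uniform(0.0, out_size, size=2)       # random translation
        p = (centered @ R.T + t).astype(int) % out_size  # wrap for tiling
        out[p[:, 1], p[:, 0]] = 255
    return out

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0                 # toy input: edges of a square
edges = extract_edges(img)
tex = synthesize(edges, out_size=128, n_copies=6, rng=rng)
```

The key property the sketch preserves is that the output size is decoupled from the input size: the edge group is analyzed once and then instantiated any number of times at any canvas resolution.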



This research was funded in part by National Science Foundation Grant No. 10001387, Functional Proceduralization of 3D Geometric Models, and National Science Foundation Grant No. 1608762, Inverse Procedural Material Modeling for Battery Design. We thank Dr. Darrell Schulze for his unconditional support and help through this project.



Author information



Corresponding author

Correspondence to Hansoo Kim.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kim, H., Dischler, JM., Rushmeier, H. et al. Edge-based procedural textures. Vis Comput 37, 2595–2606 (2021).
