Computational Visual Media, Volume 3, Issue 1, pp. 61–71

Boundary-aware texture region segmentation from manga

  • Xueting Liu
  • Chengze Li
  • Tien-Tsin Wong
Open Access
Research Article


Due to the lack of color in manga (Japanese comics), black-and-white textures are often used to enrich the visual experience. With the rising need to digitize manga, segmenting texture regions from manga has become an indispensable basis for almost all manga processing, from vectorization to colorization. Unfortunately, such texture segmentation is not easy, since textures in manga are composed of lines and exhibit features similar to those of structural lines (contour lines). Consequently, texture segmentation is currently still performed manually, which is labor-intensive and time-consuming. Various texture features have been proposed for measuring texture similarity when extracting a texture region, but they cannot achieve precise boundaries, since boundary pixels exhibit different features from inner pixels. In this paper, we propose a novel method which also adopts texture features to estimate texture regions. Unlike existing methods, however, the estimated texture region is regarded as only an initial, imprecise texture region. We then expand this initial region to the precise boundary, based on local smoothness, via a graph-cut formulation. This allows our method to extract texture regions with precise boundaries. We have applied our method to various manga images, achieving satisfactory results in all cases.
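The expansion step summarized above, growing an initial texture estimate out to a precise boundary under a local-smoothness prior, is an instance of binary labeling by s-t minimum cut. The sketch below illustrates that general formulation on a 1D row of pixels; it is not the paper's actual method. The `scores` array (a per-pixel texture-likeness estimate), the data-term weight of 10, the `smooth` weight, and the Edmonds-Karp solver are all illustrative assumptions.

```python
from collections import defaultdict, deque

EPS = 1e-9  # guard against floating-point residue in residual capacities

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; returns the residual-capacity graph."""
    res = {u: dict(vs) for u, vs in cap.items()}
    while True:
        # BFS for an augmenting path from s to t in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res.get(u, {}).items():
                if c > EPS and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return res
        # Trace the path back, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res.setdefault(v, {})[u] = res.get(v, {}).get(u, 0.0) + bottleneck

def segment(scores, smooth=2.0):
    """Label a 1D row of pixels as texture (1) or background (0).

    scores[i] in [0, 1] plays the role of a texture-likeness estimate;
    the data terms pull each pixel toward its likelier label, while the
    smoothness term charges `smooth` for cutting between neighbors."""
    n, S, T = len(scores), 'S', 'T'
    cap = defaultdict(dict)
    for i, p in enumerate(scores):
        cap[S][i] = 10.0 * p          # cost of labeling pixel i background
        cap[i][T] = 10.0 * (1.0 - p)  # cost of labeling pixel i texture
    for i in range(n - 1):            # smoothness between 1D neighbors
        cap[i][i + 1] = smooth
        cap[i + 1][i] = smooth
    res = max_flow(cap, S, T)
    # After max flow, pixels still reachable from S sit on the
    # texture side of the minimum cut.
    reach, queue = {S}, deque([S])
    while queue:
        u = queue.popleft()
        for v, c in res.get(u, {}).items():
            if c > EPS and v not in reach:
                reach.add(v)
                queue.append(v)
    return [1 if i in reach else 0 for i in range(n)]

print(segment([0.9, 0.8, 0.7, 0.2, 0.1, 0.1]))  # → [1, 1, 1, 0, 0, 0]
```

Note how the smoothness term snaps the label boundary to the single cheapest cut between neighbors rather than letting each pixel be labeled independently; the paper's formulation plays the same role in 2D, with data terms derived from its initial texture-region estimate.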


Keywords: manga; texture segmentation



This project was supported by the National Natural Science Foundation of China (Project No. 61272293), and Research Grants Council of the Hong Kong Special Administrative Region under RGC General Research Fund (Project Nos. CUHK14200915 and CUHK14217516).



Copyright information

© The Author(s) 2016

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Authors and Affiliations

  1. The Chinese University of Hong Kong, Hong Kong, China
  2. Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China
