
Modeling and realization of image-based garment texture transfer

  • Original article
  • Published in The Visual Computer

Abstract

We present an automated texture-transfer framework that replaces the texture in a garment image with a specified one, for applications in garment design and online presentation. In contrast to previous methods, our approach achieves seamless texture transfer from a single image while preserving fold variations and shadow details. Given a garment image and a texture image, we first extract pixel-aligned features from the garment image and build a parametric model of the garment through spatial sampling. A mesh is then extracted with the Marching Cubes algorithm, and its quality is improved through variational mesh optimization. Finally, we apply optimized parallax mapping to transfer the texture from the source texture image. Experimental results demonstrate the effectiveness of our method in transferring textures onto garment images while maintaining the fidelity of folds and shadows.
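
To make the modeling step concrete, the sketch below illustrates the general idea of spatially sampling an implicit occupancy field and extracting a mesh with Marching Cubes. It is only an illustrative sketch under stated assumptions, not the authors' implementation: `implicit_fn` is a hypothetical stand-in for the pixel-aligned implicit model described above (here a simple sphere), and the mesh extraction uses scikit-image's `measure.marching_cubes`.

```python
# Minimal sketch (assumption): sample a hypothetical implicit occupancy
# function on a regular 3D grid and extract a triangle mesh with Marching
# Cubes. `implicit_fn` is a placeholder, not the paper's pixel-aligned model.
import numpy as np
from skimage import measure


def implicit_fn(points):
    """Hypothetical occupancy field: 1 inside a sphere of radius 0.5, else 0."""
    return (np.linalg.norm(points, axis=-1) < 0.5).astype(np.float32)


# Spatial sampling of the implicit field on a regular grid over [-1, 1]^3.
res = 64
axis = np.linspace(-1.0, 1.0, res)
xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
points = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
field = implicit_fn(points).reshape(res, res, res)

# Mesh extraction at the 0.5 iso-level; spacing maps voxel indices back to
# world units so the vertices live in the original coordinate range.
verts, faces, normals, _ = measure.marching_cubes(
    field, level=0.5, spacing=(2.0 / (res - 1),) * 3
)
print(f"extracted mesh: {len(verts)} vertices, {len(faces)} faces")
```

The extracted mesh would then serve as input to the subsequent variational mesh optimization and parallax-mapping stages described in the abstract.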


Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to privacy restrictions but are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61976105 and 62202202) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX22_2342).

Author information

Corresponding author

Correspondence to Ruru Pan.

Ethics declarations

Conflict of interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

He, W., Song, B., Zhang, N. et al. Modeling and realization of image-based garment texture transfer. Vis Comput (2023). https://doi.org/10.1007/s00371-023-03153-w

