
PIVTONS: Pose Invariant Virtual Try-On Shoe with Conditional Image Completion

  • Chao-Te Chou
  • Cheng-Han Lee
  • Kaipeng Zhang
  • Hu-Cheng Lee
  • Winston H. Hsu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11366)

Abstract

Virtual try-on – synthesizing a realistic image of a person wearing a target fashion item, given a source photo of that person – is in growing demand due to the prevalence of e-commerce and advances in deep learning. However, existing deep learning virtual try-on methods focus on clothing replacement, owing to the lack of suitable datasets, and handle only flat body segments in frontal poses, given the front view of the target fashion item. In this paper, we present PIVTONS, a pose-invariant virtual try-on method for shoes. We collect Zalando-shoes, a paired feet-and-shoes virtual try-on dataset containing 14,062 shoes across 11 shoe categories. Each shoe image shows only a single view of the shoe, yet the try-on result must render other views of the shoe depending on the pose of the feet in the source photo. We formulate this task as an automatic, labor-free image completion problem and design an end-to-end neural network that incorporates a feature point detector. Through extensive experiments and ablation studies, we demonstrate the performance of the proposed framework and investigate the factors that parameterize this challenging problem.
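
To make the image completion formulation concrete, the sketch below masks out the shoe region of the feet photo and has a generator fill it in conditioned on the single product view of the target shoe. This is a minimal PyTorch illustration under stated assumptions, not the paper's architecture: the CompletionGenerator class, its layer widths, the rectangular mask, and the omission of the feature point detector and any adversarial or perceptual losses are all simplifications.

    import torch
    import torch.nn as nn

    class CompletionGenerator(nn.Module):
        """Fill in a masked shoe region, conditioned on a target shoe image."""
        def __init__(self):
            super().__init__()
            # Channel-wise concatenation of the conditioning inputs:
            # masked feet photo (3) + binary shoe mask (1) + target shoe view (3) = 7
            self.encoder = nn.Sequential(
                nn.Conv2d(7, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            )
            # Upsample-then-convolve rather than transposed convolutions,
            # a common choice for reducing checkerboard artifacts.
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, feet, mask, shoe):
            masked_feet = feet * (1 - mask)            # blank out the shoe region
            x = torch.cat([masked_feet, mask, shoe], dim=1)
            completed = self.decoder(self.encoder(x))
            # Keep original pixels outside the mask; synthesize inside it.
            return feet * (1 - mask) + completed * mask

    # Toy usage with random tensors standing in for real images.
    feet = torch.randn(1, 3, 256, 256)                 # source feet photo
    shoe = torch.randn(1, 3, 256, 256)                 # single view of the target shoe
    mask = torch.zeros(1, 1, 256, 256)
    mask[..., 192:, 64:192] = 1                        # hypothetical shoe region
    result = CompletionGenerator()(feet, mask, shoe)   # try-on result, same size as feet

Compositing the generator output only inside the mask leaves the feet and background pixels untouched, so the network's capacity is spent entirely on synthesizing the shoe region under the pose implied by the source photo.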

Keywords

Virtual try-on · Generative model

Acknowledgement

This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grants MOST 107-2634-F-002-007 and MOST 105-2221-E-002-182-MY2. We also benefited from grants from NVIDIA and from the NVIDIA DGX-1 AI Supercomputer, and we appreciate the research grants from Microsoft Research Asia.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Chao-Te Chou (corresponding author)
  • Cheng-Han Lee
  • Kaipeng Zhang
  • Hu-Cheng Lee
  • Winston H. Hsu

  All authors: National Taiwan University, Taipei, Taiwan
