Neural Re-rendering of Humans from a Single Image

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12356)

Abstract

Human re-rendering from a single image is a starkly underconstrained problem, and state-of-the-art algorithms often exhibit undesired artefacts, such as over-smoothing, unrealistic distortions of the body parts and garments, or implausible changes of the texture. To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image. Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image and easily reposed. Instead of a colour-based UV texture map, our approach further employs a learned high-dimensional UV feature map to encode appearance. This rich implicit representation captures detailed appearance variation across poses, viewpoints, person identities and clothing styles better than learned colour texture maps. The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered colour image. The above components are combined in an end-to-end-trained neural network architecture that takes as input a source person image, and images of the parametric body model in the source pose and desired target pose. Experimental evaluation demonstrates that our approach produces higher quality single-image re-rendering results than existing methods.

Project webpage: http://gvv.mpi-inf.mpg.de/projects/NHRR/.
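The abstract describes a three-stage pipeline: appearance features are extracted from the source image, gathered into a learned UV feature map on the parametric body mesh, re-rendered under the target pose and viewpoint, and finally translated into a colour image by a neural network. The PyTorch sketch below illustrates only that data flow; the module designs, channel counts, and the use of grid_sample for the image-to-UV and UV-to-image lookups are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the pipeline described above (PyTorch).
# All module designs, channel counts and input conventions are assumptions
# made for illustration; they are not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the shared building block of this sketch.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class NeuralReRenderer(nn.Module):
    """Source image -> UV feature map -> target-pose feature render -> RGB."""

    def __init__(self, feat_ch=16):
        super().__init__()
        self.encoder = conv_block(3, feat_ch)           # per-pixel appearance features
        self.uv_refiner = conv_block(feat_ch, feat_ch)  # completes the partial UV map
        self.translator = nn.Sequential(                # neural image translation to RGB
            conv_block(feat_ch, 32), nn.Conv2d(32, 3, 1), nn.Tanh(),
        )

    def forward(self, src_img, src_iuv_grid, tgt_uv_grid):
        # src_img:      (B, 3, H, W)   source person image
        # src_iuv_grid: (B, Hu, Wu, 2) for each UV texel, its (x, y) position in the
        #                              source image (DensePose-style fit), in [-1, 1]
        # tgt_uv_grid:  (B, H, W, 2)   for each target-view body pixel, its (u, v)
        #                              texture coordinate, in [-1, 1]
        feats = self.encoder(src_img)                     # image-space features
        uv_feats = F.grid_sample(feats, src_iuv_grid,     # gather into UV space
                                 align_corners=False)
        uv_feats = self.uv_refiner(uv_feats)              # learned UV feature map
        tgt_feats = F.grid_sample(uv_feats, tgt_uv_grid,  # re-render in target pose
                                  align_corners=False)
        return self.translator(tgt_feats)                 # final colour image


# Shape check with random tensors standing in for real inputs.
model = NeuralReRenderer()
out = model(torch.randn(1, 3, 256, 256),
            torch.rand(1, 128, 128, 2) * 2 - 1,
            torch.rand(1, 256, 256, 2) * 2 - 1)
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

In the paper the body model is a parametric mesh (SMPL-style) and the UV correspondences come from fits of the source and target poses; here random tensors stand in for those inputs so the sketch runs end to end.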

Acknowledgements

This work was supported by the ERC Consolidator Grant 4DReply (770784).

Author information

Corresponding author

Correspondence to Kripasindhu Sarkar.

Electronic supplementary material

Supplementary material 2 (pdf 6003 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Sarkar, K., Mehta, D., Xu, W., Golyanik, V., Theobalt, C. (2020). Neural Re-rendering of Humans from a Single Image. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12356. Springer, Cham. https://doi.org/10.1007/978-3-030-58621-8_35

  • DOI: https://doi.org/10.1007/978-3-030-58621-8_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58620-1

  • Online ISBN: 978-3-030-58621-8

  • eBook Packages: Computer Science, Computer Science (R0)
