Real-Time Hair Rendering Using Sequential Adversarial Networks

  • Lingyu Wei
  • Liwen Hu
  • Vladimir Kim
  • Ersin Yumer
  • Hao Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11208)

Abstract

We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user control over color and lighting through reference images. To handle the diversity of hairstyles and their appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate step that converts edge activation maps to orientation fields, ensuring a successful CG-to-photoreal transition while preserving the hair structures of the original input data. Since only a feed-forward pass through the network is required, our rendering runs in real time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.
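
To make the sequential, feed-forward idea concrete, below is a minimal, hypothetical PyTorch sketch of a generator chain that first maps an edge activation map to an orientation field and then maps the orientation field, conditioned on a reference image, to a shaded output. The module shapes, the `conv_block` helper, and the conditioning scheme are illustrative assumptions and do not reproduce the paper's actual networks or training procedure.

```python
# Hypothetical sketch, not the authors' code: two image-to-image generators
# chained sequentially and evaluated in a single feed-forward pass.
import torch
import torch.nn as nn

def conv_block(in_ch, hidden_ch):
    # Tiny stand-in for a U-Net-style translator producing a 3-channel image.
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(hidden_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(hidden_ch, 3, kernel_size=3, padding=1),
        nn.Tanh(),
    )

# Stage 1: edge activation map -> dense orientation field.
edges_to_orientation = conv_block(1, 64)
# Stage 2: orientation field + color/lighting reference -> photoreal image.
orientation_to_image = conv_block(3 + 3, 64)

edge_map = torch.rand(1, 1, 256, 256)   # rasterized edges of the CG strand model
reference = torch.rand(1, 3, 256, 256)  # reference photo supplying color/lighting

with torch.no_grad():  # inference only: one feed-forward pass, no optimization
    orientation = edges_to_orientation(edge_map)
    rendering = orientation_to_image(torch.cat([orientation, reference], dim=1))
print(rendering.shape)  # torch.Size([1, 3, 256, 256])
```

Because each stage is a single forward pass of a convolutional network, the chain runs at interactive rates on a GPU; disentangling structure, color, and illumination into separate stages is what allows the reference image to steer appearance without altering the input hair geometry.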

Keywords

Hair rendering · GAN

Acknowledgments

This work was supported in part by the ONR YIP grant N00014-17-S-FO14, the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the Andrew and Erna Viterbi Early Career Chair, the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005, and Adobe. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We thank Radomír Měch for insightful discussions.

Supplementary material

Supplementary material 1 (mov 58078 KB)

Supplementary material 2 (pdf 1979 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Pinscreen Inc., Los Angeles, USA
  2. University of Southern California, Los Angeles, USA
  3. Adobe Research, San Jose, USA
  4. Argo AI, Pittsburgh, USA
