Neural Hair Rendering

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12363)

Abstract

In this paper, we propose a generic neural-based hair rendering pipeline that synthesizes photo-realistic images from virtual 3D hair models. Unlike existing supervised translation methods, which require model-level similarity to preserve consistent structure representations for both real images and fake renderings, our method adopts an unsupervised solution that works on arbitrary hair models. The key component of our method is a shared latent space that encodes appearance-invariant structure information of both domains, from which realistic renderings are generated conditioned on extra appearance inputs. This is achieved through domain-specific pre-disentangled structure representations, partially shared domain encoder layers, and a structure discriminator. We also propose a simple yet effective temporal conditioning method to enforce consistency for video sequence generation. We demonstrate the superiority of our method by testing it on a large number of portraits and comparing it with alternative baselines and state-of-the-art unsupervised image translation methods.
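As a rough illustration only (not the authors' implementation), the sketch below shows the shared-latent-space idea described above: two encoders take domain-specific structure maps, their early layers are private while their later layers are shared, and a structure discriminator operates on the resulting latent codes to push them toward a common, domain-invariant space. All module names, layer sizes, and input formats are hypothetical assumptions for the sketch.

    # Hypothetical PyTorch sketch of partially shared domain encoders and a
    # structure discriminator, in the spirit of the shared latent space
    # described in the abstract. Layer sizes and names are illustrative only.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch, stride=2):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class DomainEncoder(nn.Module):
        """Encoder whose early layers are domain-specific and whose later
        layers are shared with the encoder of the other domain."""
        def __init__(self, in_channels, shared_tail):
            super().__init__()
            self.private = nn.Sequential(   # domain-specific layers
                conv_block(in_channels, 32),
                conv_block(32, 64),
            )
            self.shared = shared_tail       # layers shared across domains

        def forward(self, structure_map):
            return self.shared(self.private(structure_map))

    # Tail shared by the real-image encoder and the synthetic-render encoder.
    shared_tail = nn.Sequential(conv_block(64, 128), conv_block(128, 256))
    enc_real = DomainEncoder(in_channels=3, shared_tail=shared_tail)  # e.g. structure map from a real photo
    enc_fake = DomainEncoder(in_channels=3, shared_tail=shared_tail)  # e.g. structure map from a 3D hair model

    # Structure discriminator: classifies which domain a latent code came from,
    # encouraging the shared latent space to become domain-invariant.
    struct_disc = nn.Sequential(conv_block(256, 256), nn.Conv2d(256, 1, kernel_size=1))

    real_code = enc_real(torch.randn(1, 3, 256, 256))
    fake_code = enc_fake(torch.randn(1, 3, 256, 256))
    print(real_code.shape, struct_disc(fake_code).shape)

In such a setup, a generator conditioned on an extra appearance input would decode these latent structure codes into images; the details of that decoder and of the temporal conditioning are described in the paper itself.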

Keywords

Neural rendering · Unsupervised image translation

Supplementary material

504473_1_En_22_MOESM1_ESM.pdf (4.9 MB)
Supplementary material 1 (PDF 5,018 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

Snap Inc., Santa Monica, USA
