HairNet: Single-View Hair Reconstruction Using Convolutional Neural Networks

  • Yi Zhou (corresponding author)
  • Liwen Hu
  • Jun Xing
  • Weikai Chen
  • Han-Wei Kung
  • Xin Tong
  • Hao Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11215)

Abstract

We introduce a deep learning-based method for generating full 3D hair geometry from an unconstrained image. Our method recovers local strand details and runs in real time. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest-neighbor retrieval followed by ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and runs 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and we use the visibility of each strand as a weight term to improve reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation of hairstyles, which allows us to interpolate naturally between them. We train our network on a large set of rendered synthetic hair models. Our method scales to real images because the intermediate 2D orientation field, computed automatically from the input image, factors out the differences between synthetic and real hair. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and we show reconstructed hair sequences from videos.
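To make the pipeline described above concrete, below is a minimal PyTorch sketch of the encoder-decoder idea: a convolutional encoder maps the 2D orientation field to a grid of strand features on the parameterized scalp, and a shared decoder turns each feature into strand geometry. All names, layer sizes, the 32x32 scalp grid, and the 100-sample strands are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class HairNetSketch(nn.Module):
    """Hypothetical encoder-decoder sketch, not the authors' exact network."""

    def __init__(self, feat_dim=512, samples_per_strand=100):
        super().__init__()
        self.samples = samples_per_strand
        # Encoder: 2-channel orientation field (e.g. cos/sin of the local
        # hair direction) -> a 32x32 grid of strand feature vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(128, feat_dim, 3, padding=1),                 # stay 32x32
        )
        # Decoder: a shared MLP maps each scalp-cell feature to the 3D
        # positions of the samples along its strand.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, samples_per_strand * 3),
        )

    def forward(self, orientation_field):
        # orientation_field: (B, 2, 256, 256)
        feats = self.encoder(orientation_field)              # (B, F, 32, 32)
        B, F, H, W = feats.shape
        feats = feats.permute(0, 2, 3, 1).reshape(B, H * W, F)
        strands = self.decoder(feats)                        # (B, 1024, samples*3)
        return strands.reshape(B, H * W, self.samples, 3)    # one strand per cell

The collision loss can likewise be sketched as a soft penalty on strand samples that penetrate a proxy head shape; the ellipsoid radii below are made-up placeholders rather than the paper's actual body model.

def collision_loss(strands, radii=(1.0, 1.2, 1.0)):
    # strands: (B, N, S, 3) in head-centered coordinates; points whose
    # normalized ellipsoid distance falls below 1 lie inside the proxy head.
    r = torch.tensor(radii)
    d2 = ((strands / r) ** 2).sum(dim=-1)
    return torch.clamp(1.0 - d2, min=0.0).mean()

model = HairNetSketch()
strands = model(torch.randn(1, 2, 256, 256))
print(strands.shape)               # torch.Size([1, 1024, 100, 3])
print(collision_loss(strands))     # scalar penalty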

Keywords

Hair reconstruction · Real-time · DNN

Acknowledgements

We thank Weiyue Wang, Haoqi Li, Sitao Xiang, and Tianye Li for their valuable suggestions on the design of the algorithms and the writing of the paper. This work was supported in part by the ONR YIP grant N00014-17-S-FO14; the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA; the Andrew and Erna Viterbi Early Career Chair; the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005; Adobe; and Sony. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

Supplementary material

Supplementary material 1: 474198_1_En_15_MOESM1_ESM.zip (32.7 MB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Yi Zhou (1), corresponding author
  • Liwen Hu (1)
  • Jun Xing (2)
  • Weikai Chen (2)
  • Han-Wei Kung (3)
  • Xin Tong (4)
  • Hao Li (1, 2, 3)

  1. University of Southern California, Los Angeles, USA
  2. USC Institute for Creative Technologies, Los Angeles, USA
  3. Pinscreen, Santa Monica, USA
  4. Microsoft Research Asia, Beijing, China