Signal, Image and Video Processing, Volume 10, Issue 4, pp. 783–790

Time-coherent 3D animation reconstruction from RGB-D video

  • Naveed Ahmed
  • Salam Khalifa
Original Paper

Abstract

We present a new method to reconstruct a time-coherent 3D animation from RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. These feature points are then used to match 3D point clouds in consecutive frames, independent of their resolution. Our new motion vector-based dynamic alignment method then reconstructs a fully spatio-temporally coherent 3D animation. We perform extensive quantitative validation using a novel error function, in addition to the standard techniques in the literature, and compare our method with existing approaches. We show that, despite the temporal and spatial noise inherent in RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
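The pipeline described above — feature extraction per frame, cross-frame matching, and motion-vector computation between consecutive point clouds — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the ratio-test threshold are placeholder assumptions, and the descriptors here stand in for whatever color/depth features are actually extracted.

```python
import numpy as np

def back_project(u, v, z, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project pixel (u, v) with depth z into a 3D camera-space point.
    Intrinsics are illustrative defaults, not values from the paper."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test:
    accept a match only if the best distance is clearly smaller
    than the second-best, which rejects ambiguous features."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def motion_vectors(pts_a, pts_b, matches):
    """3D displacement of each matched feature between consecutive frames;
    these vectors would drive the dynamic alignment step."""
    return [pts_b[j] - pts_a[i] for i, j in matches]
```

In this sketch, matching operates on descriptors rather than raw 3D positions, which is what makes the correspondence independent of the point clouds' resolutions.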

Keywords

3D video · 3D animation · RGB-D video · Temporally coherent 3D animation

Supplementary material

Supplementary material 1: 11760_2015_813_MOESM1_ESM.txt (1 KB)


Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  1. Department of Computer Science, University of Sharjah, Sharjah, UAE
