ReMagicMirror: Action Learning Using Human Reenactment with the Mirror Metaphor

  • Fabian Lorenzo Dayrit (email author)
  • Ryosuke Kimura
  • Yuta Nakashima
  • Ambrosio Blanco
  • Hiroshi Kawasaki
  • Katsushi Ikeuchi
  • Tomokazu Sato
  • Naokazu Yokoya
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10132)

Abstract

We propose ReMagicMirror, a system to help people learn actions (e.g., martial arts, dances). We first capture the motions of a teacher performing the action to be learned, using two RGB-D cameras. Next, we fit a parametric human body model to the depth data and texture it using the color data, reconstructing the teacher’s motion and appearance. The learner then stands in front of the ReMagicMirror system, which acts as a mirror. We overlay the teacher’s reconstructed body on top of this mirror in an augmented reality fashion. The learner can intuitively manipulate the reconstruction’s viewpoint by simply rotating her body, allowing for easy comparison between the learner and the teacher. We perform a user study to evaluate our system’s ease of use, effectiveness, quality, and appeal.
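The abstract's viewpoint manipulation can be sketched as a small angle mapping: in a mirror, a learner who turns by some yaw angle sees the reflection turn the opposite way, so the overlaid teacher is rotated accordingly to keep the two views comparable. This is a minimal illustrative sketch, not the paper's implementation; the function name, the sign convention, and the use of a single yaw angle are all assumptions.

```python
def mirror_view_yaw(learner_yaw_deg: float) -> float:
    """Hypothetical mirror-metaphor mapping: when the learner rotates
    by theta degrees, render the teacher's reconstruction rotated by
    -theta, mimicking how a reflection turns in a real mirror."""
    yaw = -learner_yaw_deg
    # Normalize into [-180, 180) so the render transform stays stable.
    return (yaw + 180.0) % 360.0 - 180.0
```

For example, a learner turning 90° to the right would see the teacher's reconstruction presented as if turned 90° the other way, exposing the matching side of the body for comparison.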

Keywords

3D human reconstruction · Human reenactment · RGB-D sensors

Notes

Acknowledgement

This work is supported by MSR CORE 11/12 Project.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Fabian Lorenzo Dayrit (1, email author)
  • Ryosuke Kimura (3)
  • Yuta Nakashima (1)
  • Ambrosio Blanco (2)
  • Hiroshi Kawasaki (3)
  • Katsushi Ikeuchi (2)
  • Tomokazu Sato (1)
  • Naokazu Yokoya (1)
  1. Nara Institute of Science and Technology, Nara, Japan
  2. Microsoft Research Asia, Beijing, China
  3. Kagoshima University, Kagoshima, Japan