A Worked-Out Experience in Programming Humanoid Robots via the Kinetography Laban
This chapter discusses the possibility of using Laban notation to program humanoid robots. Laban notation documents human movement as a sequence of symbols that express motions defined in physical space. Reasoning around the simple action of “taking a ball”, we show the flexibility of the notation, which can describe an action at different levels of detail depending on the final objective of the notation. These characteristics make Laban notation suitable as a high-level language and as a motion-segmentation tool for humanoid robot programming and control. The main problem in robotics is to express actions that are defined and operate in physical space in terms of robot motions that originate in the robot's motor control space: this is the fundamental robotics issue of inversion. We first show how the symbols Laban uses to describe human gestures can be translated into actions for the robot by using a framework called the Stack of Tasks. We then report on an experiment aiming to implement, on a simulated humanoid platform, the notation score of a “Tutting Dance” executed by a dancer. Once the whole movement had been implemented on the robot, it was notated again using Laban notation. The comparison between the two scores shows that the robot's movements differ slightly from the dancer's. We then discuss plausible origins of these differences.
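The "inversion" problem described above, mapping a task defined in physical (Cartesian) space to motions in the robot's joint space, can be illustrated with a minimal numerical sketch. The example below is an assumption-laden toy, not the Stack of Tasks framework itself: it uses a hypothetical two-link planar arm and damped Newton iterations on the forward-kinematics map to reach a Cartesian target.

```python
import math

# Toy illustration of task-space inversion: a task is given in
# physical space (a target point), and joint motions realizing it
# are computed by iteratively inverting the arm's Jacobian.
# Link lengths are assumed values for this sketch.
L1, L2 = 1.0, 1.0

def forward(q1, q2):
    """End-effector position of a 2-link planar arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """2x2 Jacobian of the forward map with respect to (q1, q2)."""
    return [
        [-L1 * math.sin(q1) - L2 * math.sin(q1 + q2), -L2 * math.sin(q1 + q2)],
        [ L1 * math.cos(q1) + L2 * math.cos(q1 + q2),  L2 * math.cos(q1 + q2)],
    ]

def solve_ik(target, q=(0.3, 0.3), gain=0.5, iters=100):
    """Damped Newton iteration: dq = gain * J^{-1} * (target - x)."""
    q1, q2 = q
    for _ in range(iters):
        x, y = forward(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        J = jacobian(q1, q2)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        if abs(det) < 1e-9:  # near a kinematic singularity; stop
            break
        # Explicit 2x2 inverse applied to the task-space error.
        q1 += gain * ( J[1][1] * ex - J[0][1] * ey) / det
        q2 += gain * (-J[1][0] * ex + J[0][0] * ey) / det
    return q1, q2

q1, q2 = solve_ik((1.2, 0.8))
x, y = forward(q1, q2)
# (x, y) converges close to the Cartesian target (1.2, 0.8)
```

In the chapter's setting, each Laban symbol would define such a task (a direction or placement in physical space), and the controller resolves it into joint motions; the Stack of Tasks additionally handles hierarchies of several simultaneous tasks, which this sketch omits.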
Keywords: Humanoid Robotics · Dance Tutors · Laban Score · Direction Symbols · Dance Notation
This work is supported by ERC-ADG project 340050 Actanthrope. The authors thank Noëlle Simonet, professor of Kinetography Laban at the CNMDP (Conservatoire National de Musique et de Danse de Paris), for reviewing the Laban scores, and Tiphaine Jahier, dancer and Laban notator, for her participation in reading notations and performing actions.