Towards a learnt neural body schema for dexterous coordination of action in humanoid and industrial robots


During any goal-oriented behavior, the dual processes of generation of dexterous actions and anticipation of the consequences of potential actions must seamlessly alternate. This article presents a unified neural framework for generation and forward simulation of goal-directed actions and validates the architecture through diverse experiments on humanoid and industrial robots. The basic idea is that actions are consequences of a simulation process that animates the internal model of the body (namely the body schema), in the context of intended goals/constraints. Specific focus is on (a) Learning: how the internal model of the body can be acquired by any robotic embodiment and extended to coordinated tools; (b) Configurability: how diverse forward/inverse models of action can be ‘composed’ at runtime by coupling/decoupling different body (body \(+\) tool) chains with task-relevant goals and constraints represented as multi-referential force fields; and (c) Computational simplicity: how both the synthesis of motor commands to coordinate highly redundant systems and the ensuing forward simulations are realized through well-posed computations without kinematic inversions. The performance of the neural architecture is demonstrated through a range of motor tasks on the 53-DoF humanoid robot iCub and two industrial robots performing real-world assembly, with emphasis on dexterity, accuracy, speed, obstacle avoidance, multiple task-specific constraints, and task-based configurability. The relevance of the proposed framework is also discussed in the context of other ideas in motor control, such as the Equilibrium Point Hypothesis, Optimal Control, and Active Inference, as well as emerging studies from neuroscience.



  1. The difference between body image and body schema is disputed and somewhat fuzzy. For our purposes, we assume that they are two sides of the same coin: the former stresses the static component, based mainly on proprioceptive information, whereas the latter relates to the dynamic synergy-formation function.

  2. The condition for bounded acceleration, \(\partial ^{2}\xi /\partial t^{2}=\beta \gamma ^{2}(\xi (1-\xi ))^{2\beta -1}(1-2\xi )\), at the equilibrium points is \(0.5<\beta <1\). The jerk of \(\xi (t)\), \(\partial ^{3}\xi /\partial t^{3}=\beta \gamma ^{3}(\xi (1-\xi ))^{3\beta -2}\{(2\beta -1)(1-2\xi )^{2}-2\xi (1-\xi )\}\), imposes the additional restriction \(2/3<\beta <1\) for bounded jerk.

  3. Non-Lipschitzian systems have point attractors of infinite stability, in the sense that the gradient of their Lyapunov function diverges at the equilibrium point; a consequence is that they reach equilibrium in finite time (a terminal attractor). Since \(\partial \dot{\xi }/\partial \xi =\beta \gamma (\xi (1-\xi ))^{\beta -1}(1-2\xi )\) and \(\beta <1\), this expression tends to \(\infty \) at the equilibrium points.


  1. Arimoto, S., et al. (2005). Natural resolution of ill-posedness of inverse kinematics for redundant robots: A challenge to Bernstein’s degrees-of-freedom problem. Advanced Robotics, 19(4), 401–434.

  2. Asatryan, D. G., & Feldman, A. G. (1965). Functional tuning of the nervous system with control of movements or maintenance of a steady posture. Biophysics, 10, 925–935.

  3. Baillieul, J., & Martin, D. P. (1990). Resolution of kinematic redundancy. Proceedings of Symposia in Applied Mathematics, 41, 49–89.

  4. Balestrino, A., De Maria, G., & Sciavicco, L. (1984). Robust control of robotic manipulators. In Proceedings of the 9th IFAC world congress (Vol. 5, pp. 2435–2440).

  5. Bekey, G., & Goldberg, K. Y. (Eds.). (2012). Neural networks in robotics (Vol. 202). Berlin: Springer.

  6. Bernstein, N. (1935). The problem of the interrelationships between coordination and localization. Retrieved November 13th, 2015 from

  7. Bernstein, N. (1967). The coordination and regulation of movements. Oxford: Pergamon Press.

  8. Bhat, A. A., & Mohan, V. (2015). How iCub learns to imitate use of a tool quickly by recycling the past knowledge learnt during drawing. In Biomimetic and biohybrid systems (pp. 339–347). Berlin: Springer.

  9. Bizzi, E., & Polit, A. (1978). Processes controlling arm movements in monkeys. Science, 201, 1235–1237.

  10. Bryson, E. (1999). Dynamic optimization. Menlo Park, CA: Addison Wesley Longman.

  11. Buss, S. R., & Kim, J.-S. (2005). Selectively damped least squares for inverse kinematics. Journal of Graphics Tools, 10(3), 37–49.

  12. Cai, H., Werner, T., & Matas, J. (2013). Fast detection of multiple textureless 3-D objects. In Computer vision systems (pp. 103–112). Berlin: Springer.

  13. DARWIN D9.4. (2014). Deliverable D9.4: Third year demonstrators and evaluation report. EC FP7 project DARWIN Grant No. 270138. Retrieved November 10th, 2015 from

  14. DARWIN D9.5. (2015). Deliverable D9.5: Industrial assembly demonstrator and final evaluation. EC FP7 project DARWIN Grant No. 270138. Retrieved November 10th, 2015 from

  15. De Luca, A., & Oriolo, G. (1991). Issues in acceleration resolution of robot redundancy. In Third IFAC symposium on robot control (pp. 93–98).

  16. De Luca, A., Oriolo, G., & Siciliano, B. (1992). Robot redundancy resolution at the acceleration level. Laboratory Robotics and Automation, 4, 97–106.

  17. Featherstone, R. (1987). Robot Dynamics Algorithms. Dordrecht: Kluwer.

  18. Featherstone, R., & Khatib, O. (1997). Load independence of the dynamically consistent inverse of the Jacobian matrix. International Journal of Robotics Research, 16(2), 168–170.

  19. Flash, T., & Hogan, N. (1985). The coordination of arm movements: an experimentally confirmed mathematical model. Journal of Neuroscience, 5, 1688–1703.

  20. Frey, S. H., & Gerry, V. E. (2006). Modulation of neural activity during observational learning of actions and their sequential orders. Journal of Neuroscience, 26, 13194–13201.

  21. Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.

  22. Friston, K. (2011). What is optimal about motor control? Neuron, 72(3), 488–498.

  23. Gallese, V., & Lakoff, G. (2005). The brain’s concepts: The role of the sensory-motor system in reason and language. Cognitive Neuropsychology, 22(3), 455–479.

  24. Gallese, V., & Sinigaglia, C. (2011). What is so special about Embodied Simulation. Trends in Cognitive Sciences, 15(11), 512–519.

  25. Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. Annals of the New York Academy of Sciences, 1156, 97–117.

  26. Graziano, M. S. A., & Botvinick, M. M. (2002). How the brain represents the body: Insights from neurophysiology and psychology. In W. Prinz & B. Hommel (Eds.), Common mechanisms in perception and action: Attention and performance (pp. 136–157). Oxford: Oxford University Press.

  27. Guigon, E. (2011). Models and architectures for motor control: Simple or complex? In F. Danion & M. L. Latash (Eds.), Motor control (pp. 478–502). Oxford: Oxford University Press.

  28. Haggard, P., & Wolpert, D. M. (2005). Disorders of body schema. In H. J. Freund, M. Jeannerod, M. Hallett, & R. Leiguarda (Eds.), Higher-order motor disorders: From neuroanatomy and neurobiology to clinical neurology (pp. 261–271). Oxford: Oxford University Press.

  29. Head, H., & Holmes, G. (1911). Sensory disturbances in cerebral lesions. Brain, 34, 102–254.

  30. Hollerbach, J. M., & Suh, K. C. (1987). Redundancy resolution of manipulators through torque optimization. IEEE Journal of Robotics and Automation, 3(4), 308–316.

  31. Hsu, P., Hauser, J., & Sastry, S. (1989). Dynamic control of redundant manipulators. Journal of Robotic Systems, 6(2), 133–148.

  32. Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport, 7, 2325–2330.

  33. Jordan, M. I. (1990). Motor learning and the degrees of freedom problem. In M. Jeannerod (Ed.), Attention and performance XIII. Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

  34. Jordan, M. I., & Rumelhart, D. E. (1992). Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3), 307–354.

  35. Khatib, O. (1987). A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal of Robotics and Automation, 3(1), 43–53.

  36. Khatib, O., et al. (2004). Human-centered robotics and interactive haptic simulation. International Journal of Robotics Research, 23(2), 167–478.

  37. Kranczioch, C., Mathews, S., Dean, J. A., & Sterr, A. (2009). On the equivalence of executed and imagined movements. Human Brain Mapping, 30, 3275–3286.

  38. Lashley, K. S. (1933). Integrative function of the cerebral cortex. Physiological Reviews, 13(1), 1–42.

  39. Lee, S., & Kil, R. M. (1990, June). Robot kinematic control based on bidirectional mapping neural network. In 1990 IJCNN international joint conference on neural networks, 1990 (pp. 327–335). New York: IEEE.

  40. Lewis, F. W., Jagannathan, S., & Yesildirak, A. (1998). Neural network control of robot manipulators and non-linear systems. Boca Raton: CRC Press.

  41. Li, S., Chen, S., Liu, B., Li, Y., & Liang, Y. (2012). Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing, 91, 1–10.

  42. Liégeois, A. (1977). Automatic supervisory control of the configuration and behavior of multibody mechanisms. IEEE Transactions on Systems, Man and Cybernetics, 7(12), 868–871.

  43. Lourakis, M., & Zabulis, X. (2013). Model-based pose estimation for rigid objects. In Computer vision systems (pp. 83–92). Berlin: Springer.

  44. Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Science, 8, 79–86.

  45. Mel, B. W. (1988). MURPHY: A robot that learns by doing. In Neural information processing systems (pp. 544–553).

  46. Mohan, V., & Morasso, P. (2011). Passive motion paradigm: An alternative to optimal control. Frontiers in Neurorobotics, 5, 4.

  47. Mohan, V., Morasso, P., Metta, G., & Sandini, G. (2009). A biomimetic, force-field based computational model for motion planning and bimanual coordination in humanoid robots. Autonomous Robots, 27, 291–301.

  48. Mohan, V., Morasso, P., Zenzeri, J., Metta, G., Chakravarthy, V. S., & Sandini, G. (2011). Teaching a humanoid robot to draw ‘Shapes’. Autonomous Robots, 31(1), 21–53.

  49. Mussa-Ivaldi, F. A., Morasso, P., & Zaccaria, R. (1988). Kinematic networks. A distributed model for representing and regularizing motor redundancy. Biological Cybernetics, 60, 1–16.

  50. Nakamura, Y., & Hanafusa, H. (1986). Inverse kinematics solutions with singularity robustness for robot manipulator control. Journal of Dynamic Systems, Measurement, and Control, 108, 163–171.

  51. Nakamura, Y., & Hanafusa, H. (1987). Optimal redundancy control of robot manipulators. International Journal of Robotics Research, 6(1), 32–42.

  52. Nakanishi, J., Cory, R., Mistry, M., Peters, J., & Schaal, S. (2008). Operational space control: A theoretical and empirical comparison. The International Journal of Robotics Research, 27(6), 737–757.

  53. Nguyen, L., Patel, R. V., & Khorasani, K. (1990, June). Neural network architectures for the forward kinematics problem in robotics. In 1990 IJCNN international joint conference on neural networks (pp. 393–399). New York: IEEE.

  54. Peters, J., & Schaal, S. (2008). Learning to control in operational space. The International Journal of Robotics Research, 27(2), 197–212.

  55. Pickering, M. J., & Clark, A. (2014). Getting ahead: Forward models and their role in cognitive architecture. Trends in Cognitive Sciences, 18(9), 451–456.

  56. Salaün, C., Padois, V., & Sigaud, O. (2009, October). Control of redundant robots using learned models: An operational space control approach. In IROS 2009 IEEE/RSJ international conference on intelligent robots and systems, 2009 (pp. 878–885). New York: IEEE.

  57. Scott, S. (2004). Optimal feedback control and the neural basis of volitional motor control. Nature Reviews Neuroscience, 5, 534–546.

  58. Senda, K. (1999). Quasioptimal control of space redundant manipulators. AIAA Guidance, Navigation, and Control Conference, 3, 1877–1885.

  59. Sentis, L., & Khatib, O. (2005). Synthesis of wholebody behaviors through hierarchical control of behavioral primitives. International Journal of Humanoid Robotics, 2(4), 505–518.

  60. Sevdalis, V., & Keller, P. E. (2011). Captured by motion: Dance, action understanding, and social cognition. Brain & Cognition, 77, 231–236.

  61. Todorov, E. (2006). Optimal control theory. In K. Doya, et al. (Eds.), Bayesian brain: Probabilistic approaches to neural coding (pp. 269–298). Cambridge, MA: MIT Press.

  62. Umiltà, M. A., Escola, L., Intskirveli, I., Grammont, F., Rochat, M., Caruana, F., et al. (2008). When pliers become fingers in the monkey motor system. Proceedings of the National Academy of Sciences of the United States of America, 105(6), 2209–13.

  63. Wampler, C. W. (1986). Manipulator inverse kinematic solutions based on vector formulations and damped least squares methods. IEEE Transaction on Systems, Man, and Cybernetics, 16, 93–101.

  64. Whitney, D. E. (1969). Resolved motion rate control of manipulators and human prostheses. IEEE Transactions on Man Machine Systems, 10(2), 47–53.

  65. Wolovich, W. A., & Elliot, H. (1984). A computational technique for inverse kinematics. In Proceedings of the 23rd IEEE conference on decision and control (pp. 1359–1363).

  66. Zak, M. (1991). Terminal chaos for information processing in neurodynamics. Biological Cybernetics, 64, 343–351.



The work presented in this article was supported by the Robotics, Brain and Cognitive Sciences Department, IIT, the EU FP7 Project DARWIN (Grant No. FP7-270138) and US Dept. of Defense Grant W911QY-12-C0078.

Author information

Correspondence to Ajaz Ahmad Bhat.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 19871 KB)

Supplementary material 2 (mp4 26014 KB)

Supplementary material 3 (mp4 21926 KB)


Appendix: A neural implementation of time base generator


A time base generator (TBG) is a scalar dynamical system in the normalized variable \(\xi \) given by:

$$\begin{aligned} \dot{\xi } = \gamma (\xi (1-\xi ))^{\beta },\quad \beta \in (0,1), \end{aligned}$$

where \(\xi (t)\) is a smooth sigmoid from \(\xi (0) =0\) to \(\xi (t_f ) =1\), with a bell-shaped velocity profile and a desired finite movement duration \(t_f \). The system has two equilibrium points, an unstable one at \(\xi =0\) and a stable one at \(\xi =1\); consequently, the system always converges to \(\xi =1\). The time history of the TBG can be regulated using \(\beta \). The \(\gamma \) parameter has a dual function: controlling the convergence time, and resetting the TBG to make it excitable for subsequent activation cycles. As regards the exponent \(\beta \), it can be shown that the conditionFootnote 2 \(\beta >2/3\) is essential in order for the third derivative of \(\xi (t)\) (jerk) to be defined at \(t=0\) and \(t= t_f\). Under these conditions, the dynamics of the system are non-Lipschitzian,Footnote 3 i.e. the equilibrium configurations do not satisfy the Lipschitz condition for ODEs since \(|\partial \dot{\xi }/\partial \xi |\rightarrow \infty \). This implies that the equilibrium point is a terminal attractor, and systems with terminal attractor dynamics always converge in finite time (Zak 1991).
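As an illustration, the TBG dynamics can be integrated numerically. The sketch below is our own (the function name, step size, and the small initial perturbation used to leave the unstable equilibrium at \(\xi =0\) are assumptions, not taken from the paper):

```python
def simulate_tbg(t_f=1.0, beta=0.75, dt=1e-4, xi0=1e-6):
    """Euler-integrate the TBG dynamics xi' = gamma * (xi*(1-xi))**beta.

    Since xi = 0 is an (unstable) equilibrium, integration starts from a
    small perturbation xi0. gamma is set as in the appendix so that the
    simplified system xi' = gamma*xi**beta converges in time t_f; the
    full symmetric system takes somewhat longer, so we integrate to 3*t_f.
    """
    gamma = 1.0 / (t_f * (1.0 - beta))
    xi, t = xi0, 0.0
    traj = []  # (t, xi, xi_dot) samples
    while t < 3.0 * t_f:
        xi_dot = gamma * (xi * (1.0 - xi)) ** beta
        traj.append((t, xi, xi_dot))
        xi = min(xi + xi_dot * dt, 1.0)  # clip at the stable equilibrium
        t += dt
    return traj
```

Plotting the second and third components of the samples shows the smooth sigmoid \(\xi (t)\) and its bell-shaped velocity profile, whose peak \(\gamma (1/4)^{\beta }\) occurs at \(\xi = 0.5\).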

To derive the convergence time, let us consider a simpler dynamical system:

$$\begin{aligned} \dot{\xi } = \gamma \xi ^{\beta },\qquad t_f = \int _0^{t_f} dt=\int _0^1 \frac{d\xi }{\gamma \xi ^{\beta }}=\frac{1}{\gamma (1-\beta )}. \end{aligned}$$

Once again we can see that the equilibrium point is a terminal attractor, as the convergence time is always finite and can be precisely specified through the constant \(\gamma =1/(t_f (1-\beta ))\).
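The closed-form solution of this simplified system, \(\xi (t)=((1-\beta )\gamma t)^{1/(1-\beta )}\), makes the finite convergence time explicit: with \(\gamma =1/(t_f (1-\beta ))\) it reaches exactly 1 at \(t=t_f\). A small numerical check (our own sketch, not from the paper):

```python
def xi_closed_form(t, t_f=2.0, beta=0.8):
    """Closed-form solution of xi' = gamma * xi**beta with xi(0) = 0,
    using gamma = 1/(t_f*(1-beta)) so that xi(t_f) = 1 exactly."""
    gamma = 1.0 / (t_f * (1.0 - beta))
    return ((1.0 - beta) * gamma * t) ** (1.0 / (1.0 - beta))
```

A finite-difference derivative of this expression matches \(\gamma \xi ^{\beta }\), confirming that it solves the ODE and saturates at \(t_f\) rather than only asymptotically.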

Remarkably, the above dynamical system can be approximated using a reciprocal inhibition network consisting of two neurons. A single neural element is an integrate-and-fire neuron comprising a multiplier, an integrator and a power function. In the integrate-and-fire model, input spikes are multiplied by their respective synaptic weights, summed, and integrated over time. If the integral exceeds a threshold, the neuron fires and the integration restarts. The functionality in this case can be expressed as:

$$\begin{aligned} \dot{\xi }_i =\prod w_i \zeta _i \end{aligned}$$


$$\begin{aligned} {\zeta _i }=\xi _i ^{\beta } \end{aligned}$$

The reciprocal inhibition network of two neurons modeling the TBG is shown in Fig. 12.

Fig. 12

Reciprocal inhibition neural network for TBG

The dynamic behavior of the network can be written as

$$\begin{aligned}&\dot{\xi }_{1}=-\gamma {\xi }_{1}^{\beta }{\xi }_2^{\beta }=-\dot{\xi }_{2}\\&{\xi }_{1}(t)+ {\xi }_2 (t) =1\\&\therefore \dot{\xi }_{2}=\gamma {\xi }_1 ^{\beta }{\xi }_2 ^{\beta }=\gamma {\xi }_2 ^{\beta }(1-\xi _2 )^{\beta }=\gamma (\xi _2 (1-\xi _2 ))^{\beta } \end{aligned}$$

This is the same as the TBG equation (Eq. 5).
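The two-neuron dynamics can be simulated directly; by construction the updates are equal and opposite, so \(\xi _1 + \xi _2\) is conserved and \(\xi _2\) reproduces the TBG profile. A minimal sketch (function name, step size, and initial perturbation are our own assumptions):

```python
def simulate_rin(t_f=1.0, beta=0.75, dt=1e-4, eps=1e-6):
    """Euler simulation of the reciprocal inhibition network:
    xi1' = -gamma * xi1**beta * xi2**beta = -xi2'.

    The network starts near (1, 0); the small perturbation eps lets it
    leave the unstable equilibrium. Integrated to 3*t_f for saturation.
    """
    gamma = 1.0 / (t_f * (1.0 - beta))
    xi1, xi2 = 1.0 - eps, eps
    for _ in range(int(3.0 * t_f / dt)):
        d = gamma * (xi1 ** beta) * (xi2 ** beta)
        # mutual inhibition: equal and opposite updates conserve xi1 + xi2
        xi1 = max(xi1 - d * dt, 0.0)
        xi2 = min(xi2 + d * dt, 1.0)
    return xi1, xi2
```

In the simulation, activity flows from the first neuron to the second until the network settles at \((\xi _1, \xi _2) = (0, 1)\), mirroring the TBG's terminal attractor.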

To perform any reaching movement, several joints (shoulder, elbow, wrist, fingers) move cooperatively, forming a synergy in a flexible and dynamic fashion. While groups of fingers may operate synergistically when playing a guitar chord, individual fingers are controlled when playing a lead. One of the basic problems of motor control is to understand how neural control structures quickly and flexibly organize and engage different parts of the body schema to cooperate synergistically in a movement sequence. The above TBG can be used to dynamically couple and decouple synergies in different ways based on the task specification. In sum, by selecting the two parameters of the TBG (\(t_f\) and \(\beta \)), a family of time-varying signals can be generated. From the point of view of real-time implementation, it is possible to use any scalar function of time satisfying the properties described above, or a look-up table.


Cite this article

Bhat, A.A., Akkaladevi, S.C., Mohan, V. et al. Towards a learnt neural body schema for dexterous coordination of action in humanoid and industrial robots. Auton Robot 41, 945–966 (2017).



  • Body schema
  • Passive motion paradigm
  • iCub
  • Motor control
  • Industrial assembly