
Head stabilization in a humanoid robot: models and implementations


Abstract

Neuroscientific studies show that humans tend to stabilize their head orientation while accomplishing a locomotor task. This is beneficial to image stabilization and, more generally, to maintaining a reference frame for the body. In robotics, too, head stabilization during robot walking provides advantages in robot vision and gaze-guided locomotion. In order to obtain the head movement behaviors found in human walking, it is necessary and sufficient to be able to control the orientation (roll, pitch and yaw) of the head in space. Based on these principles, three controllers have been designed: two classic robotic controllers, namely an inverse kinematics based controller and an inverse kinematics differential controller, and a bio-inspired adaptive controller based on feedback error learning. The controllers use the inertial feedback from an IMU sensor and drive the neck joints in order to align the head orientation with the global orientation reference. We present the results of two sets of experiments validating the robustness of the proposed control methods, focusing our analysis on the effectiveness of the bio-inspired adaptive controller compared with the classic robotic controllers. The first set of experiments, run on a simulated robot, examined the controllers' responses to a set of disturbance frequencies and to a step function. The second set of experiments was carried out on the SABIAN robot, where these controllers were implemented in conjunction with a model of the vestibulo-ocular reflex (VOR) and the opto-kinetic reflex (OKR). Such a setup makes it possible to compare the performance of the considered head stabilization controllers in conditions which mimic the human stabilization mechanisms, composed of the joint effect of the VOR, the OKR and the stabilization of the head. The results show that the bio-inspired adaptive controller is more beneficial for head stabilization in tasks involving a sinusoidal torso disturbance, and that it achieves performance comparable to the inverse kinematics controller in the step-response and locomotion experiments conducted on the real robot.



References

  • Barnes, G. (1993). Visual–vestibular interaction in the control of head and eye movement: The role of visual feedback and predictive mechanisms. Progress in Neurobiology, 41(4), 435–472.


  • Beira, R., Lopes, M., Praga, M., Santos-Victor, J., Bernardino, A., Metta, G., et al. (2006). Design of the robot-cub (iCub) head. In 2006 IEEE international conference on robotics and automation (ICRA) (pp. 94–100).

  • Benallegue, M., Laumond, J. P., & Berthoz, A. (2013). Contribution of actuated head and trunk to passive walkers stabilization. In 2013 IEEE international conference on robotics and automation (ICRA) (pp. 5638–5643).

  • Bernardin, D., Kadone, H., Bennequin, D., Sugar, T., Zaoui, M., & Berthoz, A. (2012). Gaze anticipation during human locomotion. Experimental Brain Research, 223(1), 65–78.


  • Berthoz, A. (2002). The brain’s sense of movement. Cambridge: Harvard University Press.


  • Falotico, E., Laschi, C., Dario, P., Bernardin, D., & Berthoz, A. (2011). Using trunk compensation to model head stabilization during locomotion. In 2011 11th IEEE-RAS international conference on humanoid robots (Humanoids) (pp. 440–445).

  • Falotico, E., Cauli, N., Hashimoto, K., Kryczka, P., Takanishi, A., Dario, P., et al. (2012). Head stabilization based on a feedback error learning in a humanoid robot. In 2012 IEEE international workshop on robot and human interactive communication (RO-MAN) (pp. 449–454).

  • Farkhatdinov, I., Hayward, V., & Berthoz, A. (2011). On the benefits of head stabilization with a view to control balance and locomotion in humanoids. In 2011 IEEE-RAS international conference on humanoid robots (Humanoids) (pp. 147–152).

  • Franchi, E., Falotico, E., Zambrano, D., Muscolo, G. G., Marazzato, L., Dario, P., et al. (2010). A comparison between two bio-inspired adaptive models of vestibulo-ocular reflex (VOR) implemented on the iCub robot. In 2010 10th IEEE-RAS international conference on humanoid robots (Humanoids) (pp. 251–256).

  • Franklin, G. F., Powell, J. D., & Emami-Naeini, A. (2002). Feedback control of dynamic systems. Upper Saddle River: Prentice Hall.


  • Grasso, R., Prévost, P., Ivanenko, Y. P., & Berthoz, A. (1998). Eye–head coordination for the steering of locomotion in humans: An anticipatory synergy. Neuroscience Letters, 253(2), 115–118.


  • Hashimoto, K., Kang, H. J., Nakamura, M., Falotico, E., Lim, H. O., Takanishi, A., et al. (2012). Realization of biped walking on soft ground with stabilization control based on gait analysis. In 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2064–2069).

  • Hicheur, H., Vieilledent, S., & Berthoz, A. (2005). Head motion in humans alternating between straight and curved walking path: Combination of stabilizing and anticipatory orienting mechanisms. Neuroscience Letters, 383(1), 87–92.


  • Hirasaki, E., Moore, S. T., Raphan, T., & Cohen, B. (1999). Effects of walking velocity on vertical head and body movements during locomotion. Experimental Brain Research, 127(2), 117–130.


  • Imai, T., Moore, S. T., Raphan, T., & Cohen, B. (2001). Interaction of the body, head, and eyes during walking and turning. Experimental Brain Research, 136(1), 1–18.


  • Kadone, H., Bernardin, D., Bennequin, D., & Berthoz, A. (2010). Gaze anticipation during human locomotion—Top–down organization that may invert the concept of locomotion in humanoid robots. In 2010 IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 552–557).

  • Kang, H. J., Hashimoto, K., Nishikawa, K., Falotico, E., Lim, H. O., Takanishi, A., et al. (2012). Biped walking stabilization on soft ground based on gait analysis. In 2012 4th IEEE RAS EMBS international conference on biomedical robotics and biomechatronics (BioRob) (pp. 669–674).

  • Kavanagh, J., Barrett, R., & Morrison, S. (2006). The role of the neck and trunk in facilitating head stability during walking. Experimental Brain Research, 172(4), 454–463.


  • Kawato, M. (1990). Feedback-error-learning neural network for supervised motor learning. In Advanced neural computers (pp. 365–372). Amsterdam: North-Holland.


  • Kryczka, P., Falotico, E., Hashimoto, K., Lim, H., Takanishi, A., Laschi, C., et al. (2012a). Implementation of a human model for head stabilization on a humanoid platform. In 2012 4th IEEE RAS EMBS international conference on biomedical robotics and biomechatronics (BioRob) (pp. 675–680).

  • Kryczka, P., Falotico, E., Hashimoto, K., Lim, H. O., Takanishi, A., Laschi, C., et al. (2012b). A robotic implementation of a bio-inspired head motion stabilization model on a humanoid platform. In 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2076–2081).

  • MacLellan, M. J., & Patla, A. E. (2006a). Adaptations of walking pattern on a compliant surface to regulate dynamic stability. Experimental Brain Research, 173(3), 521–530.


  • MacLellan, M. J., & Patla, A. E. (2006b). Stepping over an obstacle on a compliant travel surface reveals adaptive and maladaptive changes in locomotion patterns. Experimental Brain Research, 173(3), 531–538.


  • Marcinkiewicz, M., Kaushik, R., Labutov, I., Parsons, S., & Raphan, T. (2009). Learning to stabilize the head of a quadrupedal robot with an artificial vestibular system. In 2009 IEEE international conference on robotics and automation (ICRA) (pp. 2512–2517).

  • Moore, S. T., Hirasaki, E., Cohen, B., & Raphan, T. (1999). Effect of viewing distance on the generation of vertical eye movements during locomotion. Experimental Brain Research, 129(3), 347–361.


  • Moore, S. T., Hirasaki, E., Raphan, T., & Cohen, B. (2001). The human vestibulo-ocular reflex during linear locomotion. Annals of the New York Academy of Sciences, 942(1), 139–147.


  • Ogura, Y., Aikawa, H., Shimomura, K., Morishima, A., Lim, H. O., & Takanishi, A. (2006). Development of a new humanoid robot WABIAN-2. In 2006 IEEE international conference on robotics and automation (ICRA) (pp. 76–81).

  • Oliveira, M., Santos, C. P., Costa, L., Rocha, A., & Ferreira, M. (2011). Head motion stabilization during quadruped robot locomotion: Combining CPGs and stochastic optimization methods. International Journal of Natural Computing Research (IJNCR), 2(1), 39–62.


  • Porrill, J., Dean, P., & Stone, J. V. (2004). Recurrent cerebellar architecture solves the motor-error problem. Proceedings of the Royal Society of London B, 271(1541), 789–796.


  • Pozzo, T., Berthoz, A., & Lefort, L. (1990). Head stabilization during various locomotor tasks in humans. I. Normal subjects. Experimental Brain Research, 82(1), 97–106.


  • Pozzo, T., Berthoz, A., Lefort, L., & Vitte, E. (1991). Head stabilization during various locomotor tasks in humans. II. Patients with bilateral peripheral vestibular deficits. Experimental Brain Research, 85(1), 208–217.


  • Santos, C., Oliveira, M., Rocha, A. M. A., & Costa, L. (2009). Head motion stabilization during quadruped robot locomotion: Combining dynamical systems and a genetic algorithm. In 2009 IEEE international conference on robotics and automation (ICRA) (pp. 2294–2299).

  • Schweigart, G., Mergner, T., Evdokimidis, I., Morand, S., & Becker, W. (1997). Gaze stabilization by optokinetic reflex (OKR) and vestibulo-ocular reflex (VOR) during active head rotation in man. Vision Research, 37(12), 1643–1652.


  • Shibata, T., & Schaal, S. (2001). Biomimetic gaze stabilization based on feedback-error-learning with nonparametric regression networks. Neural Networks, 14(2), 201–216.


  • Shibata, T., Vijayakumar, S., Conradt, J., & Schaal, S. (2001). Biomimetic oculomotor control. Adaptive Behavior, 9(3–4), 189–207.


  • Sreenivasa, M. N., Souères, P., Laumond, J. P., & Berthoz, A. (2009). Steering a humanoid robot by its head. In 2009 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 4451–4456).

  • Taiana, M., Santos, J., Gaspar, J., Nascimento, J., Bernardino, A., & Lima, P. (2010). Tracking objects with generic calibrated sensors: An algorithm based on color and 3D shape features. Robotics and Autonomous Systems, 58(6), 784–795.


  • Viollet, S., & Franceschini, N. (2005). A high speed gaze control system based on the vestibulo-ocular reflex. Robotics and Autonomous Systems, 50(4), 147–161.


  • Yamada, H., Mori, M., & Hirose, S. (2007). Stabilization of the head of an undulating snake-like robot. In 2007 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 3566–3571).


Acknowledgments

This work was supported by the European Commission within the ICT STREP RoboSoM Project under Contract No. 248366. The authors would like to thank the Italian Ministry of Foreign Affairs, General Directorate for the Promotion of the “Country System”, Bilateral and Multilateral Scientific and Technological Cooperation Unit, for its support through the Joint Laboratory on Biorobotics Engineering Project.

Author information


Correspondence to Egidio Falotico.

Appendix 1: A model of gaze stabilization


Fig. 10 A model of gaze stabilization. The model aims to reproduce the joint effect of the VCR for head stabilization and of the VOR and OKR for image stabilization (Shibata and Schaal 2001; Shibata et al. 2001). We added a module named Head Pointing to guarantee that the head contributes to centering the target, compensating for head translations during locomotion. The model takes as input the target position in the camera image and computes the neck and eye joint positions (or velocities)

In this appendix we present the complete gaze stabilization model that we used to run the IJ, IK and FEL controllers on the real robot. In humans, one of the objectives of the head stabilization mechanism is to keep the image stable on the retina. Vision is degraded if an image slips on the retina, so stabilizing images is an essential task during everyday activities. Among the different mechanisms used to stabilize vision, the vestibulo-ocular reflex (VOR) is certainly the most important: it compensates for head movements that would perturb vision by turning the eye in the orbit in the direction opposite to the head movement (Barnes 1993). The VOR works in conjunction with the opto-kinetic reflex (OKR), a feedback mechanism that ensures that the eye moves in the same direction and at almost the same speed as the image. Together, the VOR and the OKR keep the image stationary on the retina, with the VOR compensating for fast movements and the OKR for slower ones (Schweigart et al. 1997).
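As an illustration of this direct pathway, the sketch below combines a feedforward VOR term and a PD feedback OKR term into a single eye velocity command. The function, its gains and the sign conventions are our own illustrative assumptions, not the controller used in the paper.

```python
# A minimal sketch of the VOR/OKR direct pathway, assuming simple
# scalar signals. The gains (k_vor, kp_okr, kd_okr) are illustrative.

def vor_okr_eye_velocity(head_velocity, retinal_slip, prev_retinal_slip, dt,
                         k_vor=1.0, kp_okr=0.5, kd_okr=0.05):
    """Return an eye velocity command [rad/s].

    head_velocity     -- angular head velocity from the inertial sensor [rad/s]
    retinal_slip      -- image displacement measured in the camera [rad]
    prev_retinal_slip -- retinal slip at the previous control step [rad]
    dt                -- control period [s]
    """
    # VOR: feedforward term, counter-rotates the eye against the head movement.
    vor_cmd = -k_vor * head_velocity
    # OKR: PD feedback term driving the retinal slip to zero.
    slip_rate = (retinal_slip - prev_retinal_slip) / dt
    okr_cmd = kp_okr * retinal_slip + kd_okr * slip_rate
    return vor_cmd + okr_cmd
```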

Several approaches have been used to model the VOR, depending on the goal of the study. For our purpose we need a bio-inspired model of image stabilization through eye movements that is suitable for a robotic implementation. In the robotic literature we found several controllers inspired by the VOR and implemented on humanoid platforms (Viollet and Franceschini 2005; Porrill et al. 2004; Shibata and Schaal 2001), but only Shibata and Schaal's model also replicates the OKR mechanism; in particular, it investigates the cooperation between these two ocular movements. The OKR receives a sensory input (the retinal slip), which can be used as a positional error signal, and its goal is to keep the image still on the retina. The VOR instead uses as sensory input the head velocity signal (acquired by the vestibular organs in the semicircular canals), inverts the sign of the measured head velocity and, with the help of a well-tuned feedforward controller, rapidly generates the appropriate motor commands for the eyes. In our implementation we use as inputs for the model the head rotation and the visual error on the camera image. To achieve appropriate VOR–OKR performance, the authors of the model synthesize the VOR as a feedforward open-loop controller using an inverse control model. The OKR is defined instead as a compensatory negative feedback controller for the VOR, implemented as a PD controller on the retinal slip. These two systems form what in biology is called the direct pathway of oculomotor control. According to Shibata and Schaal (2001) and Shibata et al. (2001), to accomplish excellent VOR and OKR performance it is necessary to introduce an indirect pathway, corresponding to a learning network located in the primate cerebellum, which acquires during the course of learning an inverse dynamics model of the oculomotor plant. The learning controller takes as input the head velocity and the estimated position of the oculomotor plant and outputs the necessary torque. It is trained with the feedback-error-learning (FEL) strategy, for which the time alignment between the input signals and the feedback-error signal is theoretically crucial. To solve this problem the authors introduce the concept of eligibility traces, modeled as a second-order linear filter applied to the input signals of the learning system. A minimal sketch of this learning scheme follows.
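The sketch below shows the FEL update with an eligibility trace under stated assumptions: the cerebellar network is reduced to a linear map, the eligibility trace is realized as two cascaded first-order filters (a second-order linear filter, as in the description above), and all gains, time constants and names are illustrative rather than taken from the paper.

```python
import numpy as np

class FELController:
    """Feedback-error learning with an eligibility trace (hedged sketch)."""

    def __init__(self, n_inputs, lr=0.01, kp=2.0, kd=0.1, tau=0.05, dt=0.001):
        self.w = np.zeros(n_inputs)  # feedforward weights (learned inverse model)
        self.lr, self.kp, self.kd = lr, kp, kd
        self.tau, self.dt = tau, dt
        # State of the second-order filter implementing the eligibility trace:
        # it smears the inputs in time so that they line up with the delayed
        # feedback-error teaching signal.
        self.x1 = np.zeros(n_inputs)
        self.x2 = np.zeros(n_inputs)

    def step(self, inputs, pos_error, vel_error):
        # Direct pathway: PD feedback controller.
        u_fb = self.kp * pos_error + self.kd * vel_error
        # Indirect pathway: learned feedforward command.
        u_ff = self.w @ inputs
        # Eligibility trace: two cascaded first-order low-pass filters.
        self.x1 += self.dt * (inputs - self.x1) / self.tau
        self.x2 += self.dt * (self.x1 - self.x2) / self.tau
        # FEL rule: the feedback command itself is the error signal that
        # trains the feedforward model.
        self.w += self.lr * u_fb * self.x2
        return u_ff + u_fb

# Usage (names hypothetical): inputs would be, e.g., the head velocity and
# the estimated plant position.
# fel = FELController(n_inputs=2)
# u = fel.step(np.array([head_vel, eye_pos_est]), pos_err, vel_err)
```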

The model of gaze stabilization we propose (see Fig. 10) takes the input from the camera image, computes the visual error (i.e. the distance of the target from the center of the image) and the 3D position of the target, and sends these values to the Head Pointing module and to the VOR/OKR controller. The objective of the Head Pointing module is to center the target in the camera image by controlling the neck joints; this also compensates for the translations of the head in locomotion tasks. The output of this module is the reference for the head stabilization controller. The Head Pointing module computes the joint angles needed to point at the target and converts them into an angular rotation of the head. In order to compute the head inverse kinematics for pointing at a tracked object, we implemented a feedforward multilayer perceptron with one hidden layer of 20 units. It takes as input the 3D position of the object in the left eye reference frame \((x^t, y^t, z^t)\) and outputs the neck pitch and yaw joint angles. The network was trained offline in a simulated environment (Matlab SIMULINK) on an 18,000-element random dataset obtained through the direct kinematics. The dataset outputs were created by choosing 18,000 random values (from \(-\pi /4\) to \(\pi /4\)) for the neck joints; from these values we computed the head rotation through the direct kinematics function, which gives the angular rotation of the head output by the Head Pointing module. The dataset inputs were the 3D positions of the object, obtained using the following two steps (a sketch of the procedure follows the list):

  • for each output head rotation, the roto-translation matrix from the eye to the neck reference frame was calculated using the Denavit–Hartenberg method;

  • the forward kinematics matrix was multiplied by the vector E(0, 0, z), where z was a random value between 100 and 5000 mm.
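A minimal sketch of this dataset generation is given below. The kinematic chain, the link offset `neck_to_eye` and the frame conventions are placeholders, since the actual Denavit–Hartenberg parameters of the robot's neck are not reported in this appendix.

```python
import numpy as np

def rot_y(a):  # pitch rotation as a 4x4 homogeneous matrix
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_z(a):  # yaw rotation as a 4x4 homogeneous matrix
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(x, y, z):  # pure translation
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def sample(n=18000, neck_to_eye=100.0, rng=np.random.default_rng(0)):
    """Generate (input, output) pairs for the head-pointing network.

    neck_to_eye is a placeholder link offset in mm; the real chain would
    come from the robot's Denavit-Hartenberg parameters.
    """
    X, Y = [], []
    for _ in range(n):
        pitch, yaw = rng.uniform(-np.pi / 4, np.pi / 4, size=2)
        # Placeholder eye-to-neck forward kinematics.
        T = rot_z(yaw) @ rot_y(pitch) @ trans(0.0, 0.0, neck_to_eye)
        z = rng.uniform(100.0, 5000.0)             # target depth in mm
        target = T @ np.array([0.0, 0.0, z, 1.0])  # E(0, 0, z), homogeneous
        X.append(target[:3])    # network input: 3D target position
        Y.append([pitch, yaw])  # network output: neck joint angles
    return np.array(X), np.array(Y)
```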

The dataset was divided into a training set (70%), a validation set (15%) and a test set (15%). After 634 epochs, the test MSE was around 0.0003 rad. For the robot experiments, the trained network was implemented in C++. The head stabilization controller, taking as input the reference \((\nu ^{r}, \phi ^{r}, \psi ^{r})\) from the Head Pointing module, yields joint velocities or positions (depending on the controller). The VOR module takes as input the encoder values of the eye joints \((\vartheta _e)\) and the head rotation for the pitch or the yaw, and produces the corresponding compensatory position movement of the eye joints \((\vartheta _e^u)\) along the same axis (eye vergence in the horizontal plane and eye tilt in the vertical plane).
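For completeness, the sketch below shows one way to train such a network with a 70/15/15 split, reusing the `sample()` helper from the previous sketch. The library (scikit-learn rather than the Matlab SIMULINK toolchain actually used), the solver defaults and the iteration budget are our assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Dataset from the generation sketch above.
X, Y = sample()

# 70% training, 15% validation, 15% test.
X_train, X_rest, Y_train, Y_rest = train_test_split(X, Y, test_size=0.30,
                                                    random_state=0)
X_val, X_test, Y_val, Y_test = train_test_split(X_rest, Y_rest, test_size=0.50,
                                                random_state=0)

# One hidden layer of 20 units, as in the text.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X_train, Y_train)

test_mse = ((net.predict(X_test) - Y_test) ** 2).mean()
print(f"test MSE: {test_mse:.6f}")
```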


Cite this article

Falotico, E., Cauli, N., Kryczka, P. et al. Head stabilization in a humanoid robot: models and implementations. Auton Robot 41, 349–365 (2017). https://doi.org/10.1007/s10514-016-9583-z

