Combining Static and Runtime Methods to Achieve Safe Standing-Up for Humanoid Robots

  • Francesco Leofante
  • Simone Vuotto
  • Erika Ábrahám
  • Armando Tacchella
  • Nils Jansen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9952)


Due to its complexity, standing up is a highly challenging task for humanoid robots, and it is often implemented by scripting the strategy the robot should execute by hand. In this paper we aim to improve on a scripted stand-up strategy by making it more stable and safe. To achieve this aim, we apply both static and runtime methods, integrating reinforcement learning, static analysis, and runtime monitoring techniques.
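The combination sketched in the abstract, a learned stand-up policy guarded by a runtime monitor that vetoes unsafe actions, can be illustrated with a toy tabular Q-learning loop. Everything below is an illustrative assumption, not the paper's actual model: states 0 to 4 stand in for discretized postures (0 = lying, 4 = standing), state 5 is an absorbing "fallen" failure state, and the transition dynamics are invented for the sketch.

```python
import random

N_STATES, GOAL, FALLEN = 6, 4, 5
ACTIONS = (0, 1)  # 0 = cautious lean, 1 = aggressive push (hypothetical)

def step(state, action):
    """Toy transition: aggressive pushes advance faster but risk a fall."""
    if action == 0:
        nxt = min(state + 1, GOAL)
        return nxt, (1.0 if nxt == GOAL else -0.01)
    if random.random() < 0.3:  # aggressive push may topple the robot
        return FALLEN, -1.0
    nxt = min(state + 2, GOAL)
    return nxt, (1.0 if nxt == GOAL else -0.01)

def monitor_allows(state, action):
    """Runtime monitor: veto aggressive pushes in late, unstable postures."""
    return not (action == 1 and state >= 3)

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning with the monitor in the loop."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s not in (GOAL, FALLEN):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            if not monitor_allows(s, a):  # monitor overrides the unsafe choice
                a = 0
            s2, r = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

Because the monitor blocks aggressive pushes near the goal, the unsafe action is never executed (and never reinforced) in those states, so the learned policy stays cautious exactly where a fall would be most likely. The paper's actual pipeline additionally uses static analysis of the learned model, which this sketch omits.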


Keywords: Model Checking · Reinforcement Learning · Action Space · Goal State · Humanoid Robot



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Francesco Leofante (1)
  • Simone Vuotto (1, 2)
  • Erika Ábrahám (2)
  • Armando Tacchella (1), corresponding author
  • Nils Jansen (3)
  1. University of Genoa, Genoa, Italy
  2. RWTH Aachen University, Aachen, Germany
  3. University of Texas at Austin, Austin, USA
