Muscle Excitation Estimation in Biomechanical Simulation Using NAF Reinforcement Learning

  • Amir H. Abdi
  • Pramit Saha
  • Venkata Praneeth Srungarapu
  • Sidney Fels
Conference paper


Abstract

Motor control is a set of time-varying muscle excitations that generate desired motions in a biomechanical system. Muscle excitations cannot be measured directly in live subjects; an alternative is to estimate muscle activations via inverse motion-driven simulation. In this article, we propose a deep reinforcement learning method to estimate muscle excitations in simulated biomechanical systems. We introduce a custom reward function that incentivizes faster point-to-point tracking of the target motion. Moreover, we deploy two new techniques, namely episode-based hard update and dual-buffer experience replay, to avoid feedback training loops. The proposed method is tested in four simulated 2D and 3D environments with 6–24 axial muscles. The results show that the models learned muscle excitations for the given motions after nearly 100,000 simulated steps, and the root-mean-square error in point-to-point reaching of the target was less than 1% of the length of the motion domain across experiments. Our reinforcement learning method departs from conventional dynamics-based approaches in that the muscle control is derived functionally by a set of distributed neurons, which can open paths toward a neural-activity interpretation of this phenomenon.
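The normalized advantage function (NAF) named in the title can be illustrated with a short sketch. This is the generic NAF formulation of Gu et al. (2016), not the authors' implementation; the function and variable names are illustrative, and in a real agent `V`, `mu`, and `L` would be heads of a neural network evaluated at the current state, with the action vector holding the muscle excitations.

```python
import numpy as np

def naf_q_value(V, mu, L, action):
    """Q(s, a) = V(s) + A(s, a), where the normalized advantage is
    A(s, a) = -0.5 * (a - mu)^T P (a - mu) and P = L L^T is
    positive-semidefinite because L is lower-triangular.
    V (state value), mu (greedy action), and L (Cholesky factor)
    would normally come from one network; here they are inputs."""
    P = L @ L.T
    diff = action - mu
    advantage = -0.5 * diff @ P @ diff
    return V + advantage

# The advantage peaks at zero when action == mu, so the greedy
# continuous action is simply the network's mu(s) output.
mu = np.array([0.3, 0.7])        # e.g. two muscle excitations
L = np.array([[1.0, 0.0],
              [0.2, 0.5]])
V = 1.5
print(naf_q_value(V, mu, L, mu))              # 1.5 (advantage = 0)
print(naf_q_value(V, mu, L, mu + 0.1) < 1.5)  # True
```

Because the quadratic advantage makes `argmax_a Q(s, a)` available in closed form as `mu(s)`, Q-learning remains tractable in the continuous excitation space, which is what makes NAF attractive for this problem.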


Keywords: Deep reinforcement learning · Muscle excitation · Inverse motion-driven simulation · Normalized advantage function · Deep Q-network



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Amir H. Abdi (1), corresponding author
  • Pramit Saha (1)
  • Venkata Praneeth Srungarapu (1)
  • Sidney Fels (1)

  1. Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
