Adaptive Interface Mapping for Intuitive Teleoperation of Multi-DOF Robots

  • Jartuwat Rajruangrabin
  • Isura Ranatunga
  • Dan O. Popa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7429)


The supervisory control of multi-DOF robots by humans is a demanding application. If a single operator is tasked with the direct control of a humanoid robot, performing coordinated tasks becomes non-intuitive and imposes an unsustainable mental load even on the most skilled operators. In this paper we use reinforcement learning to adaptively change the interface mapping from the operator user interface to the robot so as to reduce the associated operator mental load. Based on the results of the interaction with the robot, we change the dynamical map describing the relationship between user commands and robot actions. The contribution of this paper is the adaptation of the interface map using reinforcement learning with reward functions derived from quantitative performance metrics. We present promising experimental results showing that the proposed scheme can yield an easier-to-use interface map for a multi-DOF assistive robot controlled via a brain-activity sensor.
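The abstract's core idea can be caricatured in a few lines of code. The sketch below is illustrative only and is not the authors' implementation: it adapts a linear interface map between low-DOF user commands and multi-DOF robot actions, guided by a scalar reward derived from a performance metric. A plain reward-guided hill climb stands in for the paper's reinforcement learning update, and all names (`W_true`, `total_reward`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-DOF operator commands (e.g. from a brain-activity sensor) mapped to a
# 4-DOF robot via q = W @ u. W_true is the (unknown) map the operator
# intends; the adaptation only ever sees it through the reward signal.
W_true = np.array([[1.0,  0.0],
                   [0.0,  1.0],
                   [0.5, -0.5],
                   [-0.3, 0.8]])

U = rng.normal(size=(2, 20))   # batch of recorded operator commands
Q = W_true @ U                 # actions the operator was trying to produce

def total_reward(W):
    # Reward = negative task error, a stand-in for the paper's quantitative
    # performance metrics (task completion time, error rate, ...).
    return -np.linalg.norm(W @ U - Q)

W = rng.normal(scale=0.1, size=(4, 2))   # initial (poor) interface map
W0 = W.copy()

for _ in range(5000):
    candidate = W + rng.normal(scale=0.02, size=W.shape)
    if total_reward(candidate) > total_reward(W):
        W = candidate            # keep only reward-improving changes

# After adaptation, W approaches the mapping the operator intends.
```

In the paper's setting the reward comes from measured human-robot interaction rather than from a known target map, and a proper reinforcement learning algorithm replaces the greedy hill climb, but the structure is the same: the interface map itself is the object being learned.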


Keywords: Humanoid Robot · Reward Function · Interface Mapping · Interface Device · Reinforcement Learning Approach
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



References

1. Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., Goodrich, M.: Common metrics for human-robot interaction. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, pp. 33–40. ACM, New York (2006)
2. Drury, J., Scholtz, J., Yanco, H.: Awareness in human-robot interactions. In: IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 912–918 (October 2003)
3. Donmez, B., Pina, P.E., Cummings, M.L.: Evaluation criteria for human-automation performance metrics. In: Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, PerMIS 2008, pp. 77–82. ACM, New York (2008)
4. Bechar, A., Edan, Y., Meyer, J.: An objective function for performance measurement of human-robot target recognition systems in unstructured environments. In: 2004 IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 118–123 (October 2004)
5. Maida, J.C., Bowen, C.K., Pace, J.: Improving robotic operator performance using augmented reality. In: Human Factors and Ergonomics Society Annual Meeting Proceedings, vol. 51, pp. 1635–1639 (September 2007)
6. Goodrich, M., Olsen Jr., D.R.: Seven principles of efficient human robot interaction. In: IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3942–3948 (October 2003)
7. Saleh, J., Karray, F.: Towards generalized performance metrics for human-robot interaction. In: 2010 International Conference on Autonomous and Intelligent Systems (AIS), pp. 1–6 (June 2010)
8. Itoh, K., Miwa, H., Nukariya, Y., Zecca, M., Takanobu, H., Roccella, S., Carrozza, M., Dario, P., Takanishi, A.: Development of a bioinstrumentation system in the interaction between a human and a robot. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2620–2625 (October 2006)
9. Iturrate, I., Montesano, L., Minguez, J.: Robot reinforcement learning using EEG-based reward signals. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 4822–4829 (May 2010)
10. Theodorou, E., Buchli, J., Schaal, S.: Reinforcement learning of motor skills in high dimensions: A path integral approach. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 2397–2403 (May 2010)
11. Hester, T., Quinlan, M., Stone, P.: Generalized model learning for reinforcement learning on a humanoid robot. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 2369–2374 (May 2010)
12. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, 1st edn. MIT Press, Cambridge (1998)
13. Kanajar, P., Ranatunga, I., Rajruangrabin, J., Popa, D.O., Makedon, F.: Neptune: assistive robotic system for children with motor impairments. In: Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2011, pp. 59:1–59:6. ACM, New York (2011)
14. Rajruangrabin, J., Popa, D.: Robot head motion control with an emphasis on realism of neck-eye coordination during object tracking. Journal of Intelligent & Robotic Systems 63, 163–190 (2011), doi:10.1007/s10846-010-9468-x
15. Setz, C., Arnrich, B., Schumm, J., La Marca, R., Troster, G., Ehlert, U.: Discriminating stress from cognitive load using a wearable EDA device. IEEE Transactions on Information Technology in Biomedicine 14(2), 410–417 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jartuwat Rajruangrabin (1)
  • Isura Ranatunga (1)
  • Dan O. Popa (1)

  1. Department of Electrical Engineering, University of Texas at Arlington, Arlington, USA
