Adaptive Interface Mapping for Intuitive Teleoperation of Multi-DOF Robots
Supervisory control of multi-DOF robots is a demanding task for human operators. When a single operator must directly control a humanoid robot, coordinated tasks become non-intuitive and impose unsustainable mental loads even on the most skilled operators. In this paper we use reinforcement learning to adapt the interface mapping from the operator's user interface to the robot so as to reduce the operator's mental load. Based on the outcomes of the operator's interaction with the robot, we update the dynamical map that relates user commands to robot actions. The contribution of this paper is the adaptation of the interface map using reinforcement learning with reward functions derived from quantitative performance metrics. We present promising experimental results showing that the proposed scheme can yield an easier-to-use interface map for a multi-DOF assistive robot controlled via a brain-activity sensor.
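As a minimal illustration of the adaptation idea (not the paper's actual algorithm), the loop can be sketched as epsilon-greedy value learning over a small set of candidate interface maps, with the reward supplied by a quantitative performance metric. The candidate maps, success probabilities, and all parameter values below are assumptions for the sketch, not taken from the paper.

```python
import random

N_MAPS = 3  # hypothetical candidate command-to-action mappings


def simulated_performance(map_id, rng):
    """Stand-in for a quantitative performance metric: returns 1.0 if a
    teleoperation trial under the given mapping succeeds, else 0.0.
    The success probabilities are assumed; mapping 2 is the 'easiest'."""
    success_prob = [0.3, 0.5, 0.8]
    return 1.0 if rng.random() < success_prob[map_id] else 0.0


def adapt_interface_map(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Each trial selects an interface map (mostly the current best,
    occasionally a random one), observes task performance as the reward,
    and incrementally updates that map's estimated value."""
    rng = random.Random(seed)
    q = [0.0] * N_MAPS  # estimated performance of each candidate map
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.randrange(N_MAPS)                  # explore a random map
        else:
            a = max(range(N_MAPS), key=q.__getitem__)  # exploit the best map
        r = simulated_performance(a, rng)
        q[a] += alpha * (r - q[a])                     # incremental value update
    return q


values = adapt_interface_map()
best_map = max(range(N_MAPS), key=values.__getitem__)
```

After enough trials, the value estimates concentrate on the mapping with the highest observed task performance, which is then presented to the operator; in the paper the reward would come from measured operator performance rather than this simulated stand-in.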
Keywords: Humanoid Robot, Reward Function, Interface Mapping, Interface Device, Reinforcement Learning Approach