Reducing Uncertainty in Human-Robot Interaction: A Cost Analysis Approach

Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 79)

Abstract

We present a technique for robust human-robot interaction that accounts for uncertainty in the input and for the costs the robot incurs in executing a task. Specifically, this research aims to quantitatively model the confirmation feedback a robot requires while communicating with a human operator to perform a particular task. Our goal is to model human-robot interaction from the perspective of risk minimization, taking into account errors in communication, the "risk" involved in performing the requested task, and task execution costs. Given an input modality with non-trivial uncertainty, we calculate the cost associated with performing the task specified by the user and, if deemed necessary, ask the user for confirmation. The estimated task cost and the uncertainty measure are given as input to a Decision Function, whose output determines whether to execute the task or to request clarification from the user. In cases where the system estimates the cost or uncertainty (or both) to be exceedingly high, task execution is deferred until a significant reduction in the output of the Decision Function is achieved. We test our system through human-interface experiments, based on a framework custom designed for our family of amphibious robots, and demonstrate the utility of the framework in the presence of large task costs and uncertainties. We also present qualitative results of our algorithm from field trials of our robots in both open- and closed-water environments.
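
The abstract does not give the exact form of the Decision Function, but its execute/confirm/defer logic can be sketched. Below is a minimal illustration in Python; the multiplicative combination of cost and uncertainty, the two thresholds, and all names (decision_function, decide, exec_threshold, defer_threshold) are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the execute / confirm / defer logic described in the
# abstract. The product form (cost x uncertainty) and the two thresholds
# are illustrative assumptions, not the authors' actual Decision Function.

def decision_function(task_cost: float, uncertainty: float) -> float:
    """Combine estimated task cost and input uncertainty into one risk score.

    task_cost:   estimated cost of executing the requested task (>= 0)
    uncertainty: probability that the recognized command is wrong, in [0, 1]
    """
    return task_cost * uncertainty


def decide(task_cost: float, uncertainty: float,
           exec_threshold: float = 0.5,
           defer_threshold: float = 5.0) -> str:
    """Map the risk score to one of three actions.

    - low risk             -> execute immediately
    - moderate risk        -> ask the operator for confirmation
    - exceedingly high risk -> defer until the score drops significantly
    """
    risk = decision_function(task_cost, uncertainty)
    if risk < exec_threshold:
        return "execute"
    if risk < defer_threshold:
        return "confirm"  # confirmation feedback reduces uncertainty
    return "defer"


if __name__ == "__main__":
    # A cheap, confidently recognized command runs without confirmation;
    # a costly command from a noisy input channel triggers a confirmation
    # request; a very costly, very uncertain command is deferred.
    print(decide(task_cost=1.0, uncertainty=0.1))   # -> execute
    print(decide(task_cost=10.0, uncertainty=0.3))  # -> confirm
    print(decide(task_cost=50.0, uncertainty=0.5))  # -> defer
```

In this sketch, a confirmation from the operator would lower the uncertainty estimate, which in turn lowers the risk score until the task clears the execution threshold.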

Keywords

Human Operator · User Study · Decision Function · Gesture Recognition · Task Execution



Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2014

Authors and Affiliations

School of Computer Science, McGill University, Montréal, Canada
