International Journal of Social Robotics, Volume 4, Issue 2, pp 117–129

Acquiring Accurate Human Responses to Robots’ Questions

  • Stephanie Rosenthal
  • Manuela Veloso
  • Anind K. Dey


In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for state inference help. The challenge with asking non-supervisors for help is that they may not always understand the robot's state or question and may respond inaccurately as a result. We identify four different types of state information that a robot can include to ground non-supervisors when it requests help—namely context around the robot, the inferred state prediction, prediction uncertainty, and feedback about the sensors used for predicting the robot's state. We contribute two Wizard-of-Oz user studies to test which combination of this state information increases the accuracy of non-supervisors' responses. In the first study, we consider a block-construction task and use a toy robot to study questions regarding shape recognition. In the second study, we use our real mobile robot to study questions regarding localization. In both studies, we identify the same combination of information that increases the accuracy of responses the most. We validate that our combination results in more accurate responses than a combination that a set of HRI experts predicted would be best. Finally, we discuss the appropriateness of our found best combination of information to other task-driven robots.


Keywords: Human-robot interaction · Asking for help · User studies



Copyright information

© Springer Science & Business Media BV 2012

Authors and Affiliations

  • Stephanie Rosenthal (1)
  • Manuela Veloso (1)
  • Anind K. Dey (2)
  1. Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
  2. Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, USA
