Acquiring Accurate Human Responses to Robots’ Questions
In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for help with state inference. The challenge with asking non-supervisors for help is that they may not always understand the robot’s state or question and may respond inaccurately as a result. We identify four types of state information that a robot can include to ground non-supervisors when it requests help: context around the robot, the inferred state prediction, prediction uncertainty, and feedback about the sensors used for predicting the robot’s state. We contribute two Wizard-of-Oz user studies to test which combination of this state information most increases the accuracy of non-supervisors’ responses. In the first study, we consider a block-construction task and use a toy robot to study questions regarding shape recognition. In the second study, we use our real mobile robot to study questions regarding localization. In both studies, we identify the same combination of information that increases response accuracy the most. We validate that our combination yields more accurate responses than a combination that a set of HRI experts predicted would be best. Finally, we discuss the applicability of our best-found combination of information to other task-driven robots.
Keywords: Human-robot interaction · Asking for help · User studies