A Feasibility Study for Validating Robot Actions Using EEG-Based Error-Related Potentials
Validating human–robot interaction can be challenging, especially when the robot designer is interested in assessing individual robot actions within an ongoing interaction that should not be interrupted by intermittent surveys. In this paper, we propose a neuro-based method for real-time quantitative assessment of robot actions. The method encompasses decoding error-related potentials (ErrPs) from the electroencephalogram (EEG) of a human interacting with a robot and could, in the future, serve as a useful and intuitive complement to existing methods for validating human–robot interaction. To demonstrate usability, we conducted a study in which we examined EEG-based ErrPs elicited by a humanoid robot displaying semantically incorrect actions in a simplistic HRI task. We also conducted a procedurally identical control experiment with computer screen-based symbolic cursor actions. The results of our study confirmed the decodability of ErrPs in response to incorrect robot actions, with an average accuracy of \(69.0\pm 7.9\%\) across 11 subjects. Cross-comparisons of ErrPs between the two experimental tasks revealed high temporal and topographical similarity, but more distinct signals in response to the cursor actions and, as a result, better decodability, with a mean accuracy of \(90.6\pm 3.9\%\). This demonstrates that ErrPs can be sensitive to the stimulus eliciting them despite procedurally identical protocols. Re-using ErrP decoders across experimental tasks without re-calibration incurs significant performance losses and is therefore not recommended. Overall, the outcomes of our study confirm the feasibility of ErrP decoding for validating human–robot interaction, but also highlight challenges that must be overcome to enhance the usability of the proposed method.
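The single-trial decoding described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hypothetical example on synthetic EEG epochs, using the common approach of windowed-mean amplitude features plus a shrinkage-regularized LDA classifier. The sampling rate, channel count, and deflection timing are assumptions chosen only to mimic the ErrP negativity that follows observation of an erroneous action.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 128   # 1 s epochs at 128 Hz (assumed)

# Synthetic epochs: "error" trials carry an added fronto-central deflection
# roughly 250-450 ms after action onset, mimicking an ErrP component.
X = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                # 1 = erroneous action observed
erp = np.zeros(n_samples)
erp[32:58] = -1.2                               # negativity at ~250-450 ms
X[y == 1, :3, :] += erp                         # strongest on frontal channels

# Features: mean amplitude in consecutive ~62 ms windows per channel,
# a standard dimensionality reduction for ERP classification.
windows = X.reshape(n_trials, n_channels, -1, 8).mean(axis=3)
feats = windows.reshape(n_trials, -1)

# Shrinkage LDA handles the high feature-to-trial ratio typical of EEG.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, feats, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

With real EEG, the epochs would be band-pass filtered and time-locked to the robot action, and the classifier calibrated per subject; the abstract's cross-task results suggest such calibration does not transfer between stimulus types without loss.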
Keywords: Electroencephalography (EEG) · Passive brain–computer interface (BCI) · Error-related potentials (ErrP) · Event-related potentials (ERP) · Error monitoring · Human–robot interaction (HRI)
We thank Ana Alves-Pinto and Sae Franklin for helpful comments on revision and editing of the manuscript. We thank the anonymous reviewers for their detailed comments and references, which have led to significant clarification of the work in this paper. This research was partially supported by Deutsche Forschungsgemeinschaft (DFG) through the International Graduate School of Science and Engineering (IGSSE) at the Technical University of Munich (TUM).
Compliance with Ethical Standards
Conflict of interest
The authors declare no conflict of interest or competing financial interests.
This work was approved by the ethics commission of the Faculty of Medicine, Technische Universität München (TUM) under the Reference Number 236/15s.
Consent to participate and publish was obtained from the participants in verbal and written form.
Availability of Data and Material
The data and material were not made publicly available but can be obtained from the corresponding author.