A Neurophysiological Examination of Multi-robot Control During NASA’s Extreme Environment Mission Operations Project
Previous research has explored the use of an external or “3rd person” view in the context of augmented reality, video gaming, and robot control. Few studies, however, involve the use of a mobile robot to provide that viewpoint, and fewer still do so in dynamic, unstructured, high-stress environments. This study examined the cognitive state of robot operators performing complex search and rescue tasks in a simulated crisis scenario. A solo robot control paradigm was compared with a dual condition in which an alternate (surrogate) perspective was provided via voice commands to a second robot employed as a highly autonomous teammate. Subjective and neurophysiological measurements indicate that an increased level of situational awareness was achieved in the dual condition, along with a reduction in workload and decision-oriented task engagement. These results are discussed in the context of their mitigation potential for cognitive overload in complex and unstructured task environments.
Keywords: Human-robot interaction · Cognitive state · Situational awareness · Workload · Decision making · Robot-assisted rescue
This work was sponsored by the Warfighter Interface Division of the 711th Human Performance Wing at the Air Force Research Laboratory. The author would like to extend an especially warm and profound expression of gratitude to Bill Todd and Jason Poffenberger from NASA/JSC for their outstanding support, as well as Ethan Blackford, Jeff Bolles, and James Christensen for their tremendous prowess in handling complex data collection and participant management issues under daunting conditions and an extremely tight schedule.