
Smart Interaction Device for Advanced Human Robotic Interface (SID)

  • Rodger Pettitt
  • Glenn Taylor
  • Linda R. Elliott
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10904)

Abstract

Robotic assets used by dismounted Soldiers have typically been controlled through continuous, effortful teleoperation; however, more autonomous capabilities have been developed that reduce the need for continuous control of movement. While greater autonomy can make robotic systems more useful, users still need to interact with them, and operator control units (OCUs) for deployed robots still rely primarily on manual controllers such as joysticks. This report describes an evaluation of a multi-modal interface that leverages speech and gesture through a wrist-worn device, enabling an operator to direct a robotic vehicle using ground guide-inspired or infantry-inspired commands issued by voice or gesture. A smartwatch is the primary interaction device, supporting spoken input through its microphone and gesture input via single-arm gestures.
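To make the interaction concept concrete, the sketch below shows one way a multi-modal dispatcher could map speech and single-arm gesture recognitions onto a shared set of ground guide-inspired commands, so that a spoken "halt" and a raised-arm gesture resolve to the same robot command. This is a minimal illustration under assumed names (Command, RecognizedInput, dispatch, and the vocabulary entries are all hypothetical), not the SID implementation described in the paper.

    # Hypothetical sketch: both modalities resolve to one shared command set.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Command(Enum):
        """Ground guide-inspired commands shared by speech and gesture."""
        MOVE_FORWARD = auto()
        STOP = auto()
        TURN_LEFT = auto()
        TURN_RIGHT = auto()
        FOLLOW_ME = auto()

    # Illustrative vocabularies; real recognizer labels would differ.
    SPEECH_VOCABULARY = {
        "move out": Command.MOVE_FORWARD,
        "halt": Command.STOP,
        "turn left": Command.TURN_LEFT,
        "turn right": Command.TURN_RIGHT,
        "follow me": Command.FOLLOW_ME,
    }
    GESTURE_VOCABULARY = {
        "arm_forward_sweep": Command.MOVE_FORWARD,
        "arm_raised": Command.STOP,
        "arm_point_left": Command.TURN_LEFT,
        "arm_point_right": Command.TURN_RIGHT,
        "arm_beckon": Command.FOLLOW_ME,
    }

    @dataclass
    class RecognizedInput:
        modality: str      # "speech" or "gesture"
        label: str         # recognizer output label
        confidence: float  # recognizer confidence in [0, 1]

    def dispatch(event: RecognizedInput, threshold: float = 0.7) -> Optional[Command]:
        """Map a recognized input to a command, rejecting low-confidence events."""
        if event.confidence < threshold:
            return None
        vocab = SPEECH_VOCABULARY if event.modality == "speech" else GESTURE_VOCABULARY
        return vocab.get(event.label)

    if __name__ == "__main__":
        print(dispatch(RecognizedInput("speech", "halt", 0.92)))         # Command.STOP
        print(dispatch(RecognizedInput("gesture", "arm_raised", 0.88)))  # Command.STOP

Routing both recognizers into one command vocabulary keeps the robot's control logic independent of modality; adding a new input channel then only requires a new label-to-command mapping.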

Keywords

Army robotic systems · Ground robots · Autonomous systems · Gesture commands · Speech commands


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Army Research Laboratory, Fort Benning, USA
  2. Soar Technology, Inc., Ann Arbor, USA
