
Modelling Visual Communication with UAS

  • Alexander Schelle
  • Peter Stütz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9991)

Abstract

This work presents a communication concept for vision-based interaction with airborne UAS. Unlike previous approaches, this research focuses on high-level mission tasking of UAS without relying on a radio data link. The paper provides the overall concept design and focuses on communication via gestures. A model describing the gestural syntax for high-level commands, together with a feedback mechanism that enables bidirectional human-machine communication across different operational modes, is presented in detail. First real-world experiments evaluate the feasibility of the deployed sensors for the intended purpose.
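
To illustrate the kind of model the abstract describes, a minimal sketch follows: gesture sequences are parsed into a high-level command while the UAS acknowledges each step through a feedback state, giving bidirectional communication. This is not the authors' implementation; all gesture names, feedback states, and classes are hypothetical.

```python
from enum import Enum, auto


class Gesture(Enum):
    """Hypothetical gesture vocabulary for high-level UAS commands."""
    ATTENTION = auto()  # establish mutual attention with the UAS
    POINT = auto()      # deictic gesture selecting a target area
    CONFIRM = auto()    # operator confirms the proposed command
    ABORT = auto()      # operator cancels the interaction


class Feedback(Enum):
    """Hypothetical UAS feedback states shown back to the operator."""
    IDLE = auto()
    LISTENING = auto()         # attention established, awaiting command
    AWAITING_CONFIRM = auto()  # command understood, awaiting confirmation
    EXECUTING = auto()


class CommandChannel:
    """Toy state machine parsing gesture sequences into commands."""

    def __init__(self) -> None:
        self.state = Feedback.IDLE

    def observe(self, gesture: Gesture) -> Feedback:
        # An abort gesture resets the interaction from any state.
        if gesture is Gesture.ABORT:
            self.state = Feedback.IDLE
        elif self.state is Feedback.IDLE and gesture is Gesture.ATTENTION:
            self.state = Feedback.LISTENING
        elif self.state is Feedback.LISTENING and gesture is Gesture.POINT:
            self.state = Feedback.AWAITING_CONFIRM
        elif (self.state is Feedback.AWAITING_CONFIRM
              and gesture is Gesture.CONFIRM):
            self.state = Feedback.EXECUTING
        return self.state


if __name__ == "__main__":
    channel = CommandChannel()
    for g in (Gesture.ATTENTION, Gesture.POINT, Gesture.CONFIRM):
        print(g.name, "->", channel.observe(g).name)
```

The returned feedback state stands in for whatever signal the vehicle uses to acknowledge the operator (e.g. a light or manoeuvre), closing the bidirectional loop the abstract refers to.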

Keywords

Human-machine interaction · Gesture-based commanding · UAS

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Institute of Flight Systems, University of the Bundeswehr Munich, Neubiberg, Germany
