
Guiding Quadrotor Landing with Pointing Gestures

  • Boris Gromov
  • Luca Gambardella
  • Alessandro Giusti
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 12)

Abstract

We present a system which allows an operator to land a quadrotor on a precise spot in its proximity by only using pointing gestures; the system has very limited requirements in terms of robot capabilities, relies on an unobtrusive bracelet-like device worn by the operator, and depends on proven, field-ready technologies. During the interaction, the robot continuously provides feedback by controlling its position in real time: such feedback has a fundamental role in mitigating sensing inaccuracies and improving user experience. We report a user study where our approach compares well with a standard joystick-based controller in terms of intuitiveness (amount of training required), landing spot accuracy, and efficiency.
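The abstract does not spell out how a pointing gesture is turned into a landing target. A common geometric approach in pointing-based interfaces is to cast a ray along the estimated pointing direction from the operator's arm and intersect it with the ground plane; the minimal Python sketch below illustrates only that step. The function name, the shoulder-origin parameter, and the flat-ground assumption are illustrative choices, not details taken from the paper.

```python
import numpy as np

def pointed_ground_spot(shoulder_pos, pointing_dir, ground_z=0.0):
    """Intersect a pointing ray with the horizontal ground plane z = ground_z.

    shoulder_pos -- 3D ray origin (e.g. operator's shoulder), in metres
    pointing_dir -- 3D pointing direction (e.g. from an IMU-based arm-posture
                    estimate); must have a downward component to hit the ground
    Returns the 3D intersection point, or None if the ray misses the plane.
    """
    o = np.asarray(shoulder_pos, dtype=float)
    d = np.asarray(pointing_dir, dtype=float)
    d = d / np.linalg.norm(d)

    if abs(d[2]) < 1e-6:          # ray (almost) parallel to the ground plane
        return None
    t = (ground_z - o[2]) / d[2]  # ray parameter at the plane
    if t <= 0:                    # plane lies behind the ray origin
        return None
    return o + t * d

# Example: shoulder 1.4 m above ground, pointing forward and slightly down.
spot = pointed_ground_spot([0.0, 0.0, 1.4], [0.8, 0.1, -0.6])
print(spot)  # hypothetical landing-spot setpoint for the quadrotor controller
```

In a closed-loop system of the kind described above, such a spot estimate would be recomputed continuously while the quadrotor tracks it, which is what lets the operator correct sensing inaccuracies by watching the robot's motion in real time.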

Keywords

Pointing gestures · Human-robot interaction · Quadrotor landing


Acknowledgments

This work was partially supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Boris Gromov (1)
  • Luca Gambardella (1)
  • Alessandro Giusti (1)

  1. Dalle Molle Institute for Artificial Intelligence (IDSIA), USI/SUPSI, Lugano, Switzerland
