
Autonomous flying blimp interaction with human in an indoor space

  • Ning-shi Yao
  • Qiu-yang Tao
  • Wei-yu Liu
  • Zhen Liu
  • Ye Tian
  • Pei-yu Wang
  • Timothy Li
  • Fumin Zhang (corresponding author)

Abstract

We present the Georgia Tech Miniature Autonomous Blimp (GT-MAB), which is designed to support human-robot interaction experiments in an indoor space for up to two hours. GT-MAB is safe while flying in close proximity to humans. It is able to detect the face of a human subject, follow the human, and recognize hand gestures. GT-MAB employs a deep neural network based on the single shot multibox detector to jointly detect a human user’s face and hands in a real-time video stream collected by the onboard camera. A human-robot interaction procedure is designed and tested with various human users, and the learning algorithms recognize two hand-waving gestures. The human user does not need to wear any additional tracking device when interacting with the flying blimp. Vision-based feedback controllers are designed so that the blimp follows the human and flies in one of two distinguishable patterns in response to each of the two hand gestures. The blimp communicates its intentions to the human user by displaying visual symbols. The collected experimental data show that the visual feedback from the blimp in reaction to the human user significantly improves the interactive experience between the blimp and the human. The demonstrated success of this procedure indicates that GT-MAB could serve as a flying robot able to collect human data safely in an indoor environment.
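
As a concrete illustration of the vision-based following loop described above, the sketch below shows how a detected face bounding box might be mapped to yaw-rate and forward-speed commands for the blimp. This is a minimal sketch, not the authors' implementation: the Detection class, the gains K_YAW and K_FORWARD, the TARGET_WIDTH set-point, and the sign conventions are all illustrative assumptions, since the abstract does not report controller details.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Detection:
        """Face bounding box in normalized image coordinates (0 to 1)."""
        cx: float      # horizontal center of the box
        width: float   # apparent face width, used as a proxy for distance

    # Illustrative gains and set-point (assumptions, not values from the paper).
    K_YAW = 1.2          # yaw-rate gain per unit of horizontal offset
    K_FORWARD = 0.8      # forward-speed gain per unit of width error
    TARGET_WIDTH = 0.25  # desired apparent face width (stand-off distance)

    def follow_command(det: Optional[Detection]) -> Tuple[float, float]:
        """Map a face detection to (yaw_rate, forward_speed) commands.

        Returns (0, 0) when no face is detected, so the blimp holds position.
        The sign of yaw_rate assumes a particular vehicle frame convention.
        """
        if det is None:
            return 0.0, 0.0
        yaw_rate = K_YAW * (0.5 - det.cx)                       # center the face horizontally
        forward_speed = K_FORWARD * (TARGET_WIDTH - det.width)  # keep the stand-off distance
        return yaw_rate, forward_speed

    if __name__ == "__main__":
        # Example: face detected to the right of center and farther away than desired.
        print(follow_command(Detection(cx=0.65, width=0.18)))

In the actual system, the detection would come from the SSD network running on the onboard video stream, and the resulting commands would presumably feed the blimp's low-level autopilot rather than being printed.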

Key words

Robotic blimp; Human-robot interaction; Deep learning; Face detection; Gesture recognition

CLC number

TP24 



Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA
  2. College of Computing, Georgia Institute of Technology, Atlanta, USA
