homer@UniKoblenz: Winning Team of the RoboCup@Home Open Platform League 2017

  • Raphael Memmesheimer
  • Viktor Seib
  • Dietrich Paulus
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11175)

Abstract

In this paper we present the approaches that we used for this year’s RoboCup@Home participation in the Open Platform League. A special focus was put on team collaboration: two robots from different teams, not connected by any network, handed over objects to each other. The robots communicated using natural language (speech synthesis and speech recognition), a typical human-robot interface that we adapted to robot-robot interaction. Furthermore, we integrated new approaches for online tracking and learning of an operator, and we present a novel approach for teaching a robot new commands by describing them in natural language in terms of a set of previously known commands; the parameters of these commands remain interchangeable. Finally, we integrated deep neural networks for person detection and recognition, human pose estimation, gender classification, and object recognition.
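The command-teaching idea described above can be illustrated with a minimal sketch: a new command is stored as a sequence of previously known commands whose parameters are left as named slots, to be filled in at execution time. All names here (`KNOWN_COMMANDS`, `teach`, `execute`) are hypothetical and only illustrate the concept; they are not the team's actual API.

```python
# Hypothetical sketch: a taught command is a composition of known
# commands with interchangeable, named parameter slots.

# Primitive commands the robot already knows, each taking one parameter.
KNOWN_COMMANDS = {
    "go_to": lambda location: f"navigating to {location}",
    "grasp": lambda obj: f"grasping {obj}",
    "say": lambda text: f"saying '{text}'",
}

# Taught commands map a name to a list of (known_command, slot_name) steps.
TAUGHT_COMMANDS = {}

def teach(name, steps):
    """Store a new command as a sequence of known commands with named slots."""
    TAUGHT_COMMANDS[name] = steps

def execute(name, **params):
    """Run a taught command, filling each parameter slot from **params."""
    return [KNOWN_COMMANDS[cmd](params[slot])
            for cmd, slot in TAUGHT_COMMANDS[name]]

# "bring" is taught as: go to a place, grasp an object, announce it.
teach("bring", [("go_to", "location"),
                ("grasp", "object"),
                ("say", "announcement")])

print(execute("bring", location="kitchen", object="cup",
              announcement="here is your cup"))
```

Because the slots are bound only at execution time, the same taught command can be reused with different parameters ("bring the remote from the living room") without re-teaching it.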

Keywords

RoboCup@Home · RoboCup · Open Platform League · Domestic service robotics · homer@UniKoblenz

Notes

Acknowledgement

First, we thank the participating students Niklas Yann Wettengel, Florian Polster, Lukas Buchhold, Malte Roosen, Moritz Löhne, Ivanna Myckhalchychyna and Daniel Müller. We thank Nuance Communications Inc. for supporting the team with an academic licence for speech recognition. Furthermore, we thank NVIDIA for the grant of a graphics card that was used for training the operator re-identification and object classification models.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Raphael Memmesheimer (1)
  • Viktor Seib (1)
  • Dietrich Paulus (1)

  1. Active Vision Group, Institute for Computational Visualistics, University of Koblenz-Landau, Koblenz, Germany