A ROS-Based System for an Autonomous Service Robot

  • Viktor Seib
  • Raphael Memmesheimer
  • Dietrich Paulus
Chapter in the Studies in Computational Intelligence book series (SCI, volume 625)

Abstract

The Active Vision Group (AGAS) has gained extensive experience in robotics over the past years. This contribution focuses on service robotics. We present several components that are crucial for a service robot system: mapping and navigation, object recognition, and speech synthesis and recognition. This chapter provides a detailed tutorial on each of the corresponding packages. All of the presented components are published in our ROS package repository: http://wiki.ros.org/agas-ros-pkg.
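To illustrate how such components are typically combined in ROS, the following minimal Python sketch wires an object recognition result to a speech synthesis request. All topic names and message types below are illustrative assumptions, not the actual interfaces of the AGAS packages; the real interfaces are documented in this chapter and at the link above.

    #!/usr/bin/env python
    # Minimal sketch of connecting ROS components of a service robot.
    # NOTE: the topic names and message types below are hypothetical
    # placeholders, not the actual agas-ros-pkg interfaces.
    import rospy
    from std_msgs.msg import String

    def on_object_recognized(msg):
        # Announce a recognized object via the (hypothetical) speech topic.
        speech_pub.publish(String(data="I see a " + msg.data))

    if __name__ == "__main__":
        rospy.init_node("service_robot_demo")
        speech_pub = rospy.Publisher("/speech/say", String, queue_size=10)
        rospy.Subscriber("/object_recognition/result", String,
                         on_object_recognized)
        rospy.spin()  # process callbacks until shutdown

In a real system the recognition result would be a structured message rather than a plain string, but the pattern of loosely coupled nodes communicating over topics is the same one the packages in this chapter follow.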

Keywords

Service robots · SLAM · Navigation · Object recognition · Human-robot interaction · Speech recognition · Robot face

Acknowledgments

The presented software was developed in the Active Vision Group by research associates and by students in practical courses working with the robots “Lisa” and “Robbie”. The authors would like to thank Dr. Johannes Pellenz, David Gossow, Susanne Thierfelder, Julian Giesen, and Malte Knauf for their contributions to the software. Further, the authors thank Baharak Rezvan for her assistance in testing the setup procedure of the described packages.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Viktor Seib¹
  • Raphael Memmesheimer¹
  • Dietrich Paulus¹

  1. Active Vision Group (AGAS), University of Koblenz-Landau, Koblenz, Germany