Journal of Intelligent & Robotic Systems, Volume 66, Issue 1–2, pp 245–272

Johnny: An Autonomous Service Robot for Domestic Environments

  • Thomas Breuer
  • Geovanny R. Giorgana Macedo
  • Ronny Hartanto
  • Nico Hochgeschwender
  • Dirk Holz
  • Frederik Hegger
  • Zha Jin
  • Christian Müller
  • Jan Paulus
  • Michael Reckhaus
  • José Antonio Álvarez Ruiz
  • Paul G. Plöger
  • Gerhard K. Kraetzschmar

Abstract

In this article, we describe the architecture, algorithms and real-world benchmarks of Johnny Jackanapes, an autonomous service robot for domestic environments. Johnny serves as a research and development platform to explore, develop and integrate the capabilities required for real-world domestic service applications. We present a control architecture that allows the robot to cope with varied and changing domestic service tasks, as well as a software architecture that supports the rapid integration of functionality into a complete system. Furthermore, we describe novel and robust algorithms centered on multi-modal human-robot interaction, semantic scene understanding and SLAM. The complete system has been evaluated over the past years in the RoboCup@Home competition, where Johnny's outstanding performance led to successful participation. The results and lessons learned from these benchmarks are explained in more detail.

Keywords

Domestic service robots · Semantic scene understanding · Human-robot interaction

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  • Thomas Breuer (1)
  • Geovanny R. Giorgana Macedo (1)
  • Ronny Hartanto (1, 2)
  • Nico Hochgeschwender (1)
  • Dirk Holz (1, 3)
  • Frederik Hegger (1)
  • Zha Jin (1)
  • Christian Müller (1)
  • Jan Paulus (1)
  • Michael Reckhaus (1)
  • José Antonio Álvarez Ruiz (1)
  • Paul G. Plöger (1)
  • Gerhard K. Kraetzschmar (1)
  1. Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
  2. DFKI Robotics Innovation Center, Bremen, Germany
  3. Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany