
Multi-cue Localization for Soccer Playing Humanoid Robots

  • Hauke Strasdat
  • Maren Bennewitz
  • Sven Behnke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4434)

Abstract

An essential capability of a soccer playing robot is to robustly and accurately estimate its pose on the field. Tracking the pose of a humanoid robot is, however, a complex problem. The main difficulties are that the robot has only a constrained field of view, which is additionally often affected by occlusions, that the roll angle of the camera changes continuously and can only be roughly estimated, and that dead reckoning provides only noisy estimates. In this paper, we present a technique that uses field lines, the center circle, corner poles, and goals extracted from the images of a low-cost wide-angle camera, as well as motion commands and a compass, to localize a humanoid robot on the soccer field. We present a new approach to robustly extract lines using detectors for oriented line points and the Hough transform. Since we first estimate the orientation, the individual line points are localized well in the Hough domain. In addition, while matching observed lines and model lines, we do not only consider their Hough parameters. Our similarity measure also takes into account the positions and lengths of the lines. In this way, we obtain a much more reliable estimate of how well two lines fit. We apply Monte Carlo localization to estimate the pose of the robot. The observation model used to evaluate the individual particles considers the differences between the expected and measured distances and angles to the other landmarks. As we demonstrate in real-world experiments, our technique is able to robustly and accurately track the position of a humanoid robot on a soccer field. We also present experiments to evaluate the utility of using the different cues for pose estimation.
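
Illustrative sketch

The abstract describes two technical building blocks: (i) extracting field lines by letting oriented line points vote in the Hough domain, where the pre-estimated orientation restricts the range of angles each point votes for, and (ii) Monte Carlo localization, where each particle is weighted by how well the expected distances and angles to landmarks match the measured ones. The following Python sketch illustrates both ideas under simplifying assumptions; it is not the authors' implementation, and all parameter values (grid resolutions, angular window, noise standard deviations) are placeholders chosen for illustration.

    import numpy as np

    def hough_vote_oriented(points, orientations, rho_res=0.05,
                            theta_res=np.deg2rad(2.0),
                            theta_window=np.deg2rad(10.0),
                            rho_max=12.0):
        """Accumulate oriented line points in the (theta, rho) Hough domain.
        Each point carries an orientation estimate, so it only votes within a
        narrow window of theta instead of the full [0, pi) range."""
        thetas = np.arange(0.0, np.pi, theta_res)
        rhos = np.arange(-rho_max, rho_max, rho_res)
        acc = np.zeros((len(thetas), len(rhos)))
        for (x, y), phi in zip(points, orientations):
            # the line's normal direction is perpendicular to its orientation
            theta_c = (phi + np.pi / 2.0) % np.pi
            diff = (thetas - theta_c + np.pi / 2.0) % np.pi - np.pi / 2.0
            for ti in np.nonzero(np.abs(diff) < theta_window)[0]:
                rho = x * np.cos(thetas[ti]) + y * np.sin(thetas[ti])
                ri = int(round((rho + rho_max) / rho_res))
                if 0 <= ri < len(rhos):
                    acc[ti, ri] += 1.0
        return acc, thetas, rhos

    def particle_weight(particle, observations, landmarks,
                        sigma_d=0.5, sigma_a=np.deg2rad(10.0)):
        """Weight one particle (x, y, theta) by comparing expected and
        measured distances/angles to known landmarks such as goals and
        corner poles."""
        x, y, theta = particle
        w = 1.0
        for lm_id, (d_meas, a_meas) in observations.items():
            lx, ly = landmarks[lm_id]
            d_exp = np.hypot(lx - x, ly - y)
            a_exp = np.arctan2(ly - y, lx - x) - theta
            a_err = (a_meas - a_exp + np.pi) % (2.0 * np.pi) - np.pi
            w *= np.exp(-0.5 * ((d_meas - d_exp) / sigma_d) ** 2)
            w *= np.exp(-0.5 * (a_err / sigma_a) ** 2)
        return w

Restricting each point's votes to a narrow angular window around its estimated orientation keeps the peaks in the Hough accumulator sharp, which is why the orientation is estimated before the voting step.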


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Hauke Strasdat (1)
  • Maren Bennewitz (1)
  • Sven Behnke (1)

  1. University of Freiburg, Computer Science Institute, D-79110 Freiburg, Germany
