
Five Years of SSL-Vision – Impact and Development

  • Stefan Zickler
  • Tim Laue
  • José Angelo Gurzoni Jr.
  • Oliver Birbach
  • Joydeep Biswas
  • Manuela Veloso
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8371)

Abstract

Since its start in 1997, the RoboCup Small Size Robot League (SSL) has allowed teams to use their own cameras and vision algorithms. In the fast and highly dynamic SSL environment, researchers achieved significant algorithmic advances in the real-time perception of complex colored patterns. Several teams developed, published, and shared effective solutions, but for new teams, vision processing remained a heavy investment. In addition, handling the multiple cameras of all the teams became an organizational burden. Therefore, in 2008, the league started the development of a centralized, shared vision system, called SSL-Vision, to be provided to all teams. In this paper, we discuss this system’s successful adoption within the SSL itself, but also beyond it in other domains. SSL-Vision is an open source system available to any researcher interested in processing colored patterns from static cameras.
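
Below is a minimal sketch of how a team's software might consume SSL-Vision's shared output, under assumptions not stated in the abstract: SSL-Vision publishes its detection frames as Google Protocol Buffer messages over UDP multicast, and the multicast group 224.5.23.2 with port 10006 are commonly used defaults that may differ in a given setup. Decoding the payload additionally requires the .proto definitions (e.g. SSL_WrapperPacket) shipped with the SSL-Vision source; the sketch only joins the multicast group and reports incoming packets.

```python
# Sketch: listen to the SSL-Vision multicast stream (assumed defaults, adjust as needed).
import socket
import struct

MULTICAST_GROUP = "224.5.23.2"   # assumed default SSL-Vision multicast group
PORT = 10006                     # assumed default port; check your configuration


def listen(num_packets: int = 10) -> None:
    """Receive a few raw packets from the SSL-Vision multicast stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(MULTICAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    for _ in range(num_packets):
        data, addr = sock.recvfrom(65536)
        # Parsing requires the SSL_WrapperPacket protobuf classes generated from
        # the .proto files in the SSL-Vision repository; here we only report
        # that a detection frame arrived.
        print(f"received {len(data)} bytes from {addr}")


if __name__ == "__main__":
    listen()
```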

Keywords

Pattern Detection · Vision Algorithm · Color Segmentation · Open Source System · Color Calibration

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Stefan Zickler (1)
  • Tim Laue (2)
  • José Angelo Gurzoni Jr. (3)
  • Oliver Birbach (2)
  • Joydeep Biswas (4)
  • Manuela Veloso (4)

  1. iRobot Corp., Bedford, USA
  2. Cyber-Physical Systems, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Bremen, Germany
  3. AI and Robotics Lab., Centro Universitário da FEI, São Bernardo do Campo, Brazil
  4. Robotics Institute, Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
