Selective Visual Attention for Object Detection on a Legged Robot

  • Daniel Stronger
  • Peter Stone
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4434)

Abstract

Autonomous robots can use a variety of sensors, such as sonar, laser range finders, and bump sensors, to sense their environments. Visual information from an onboard camera can provide particularly rich sensor data. However, processing all the pixels in every image, even with simple operations, can be computationally taxing for robots equipped with cameras of reasonable resolution and frame rate. This paper presents a novel method for a legged robot equipped with a camera to use selective visual attention to efficiently recognize objects in its environment. The resulting attention-based approach is fully implemented and validated on an Aibo ERS-7. It effectively processes incoming images 50 times faster than a baseline approach, with no significant difference in the efficacy of its object detection.
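The computational saving described above comes from scanning only a small attention window around the expected object location rather than every pixel in the frame. The following sketch illustrates that idea only; the function name, window size, and the simple brightness-threshold "detector" are illustrative assumptions, not the paper's actual method. The 208×160 frame size matches the Aibo ERS-7's camera resolution.

```python
def detect_in_window(image, center, half_size):
    """Scan only an attention window around the expected object
    location, instead of the whole frame (illustrative sketch)."""
    h, w = len(image), len(image[0])
    cy, cx = center
    # Clip the window to the image boundaries.
    y0, y1 = max(0, cy - half_size), min(h, cy + half_size)
    x0, x1 = max(0, cx - half_size), min(w, cx + half_size)
    found = False
    scanned = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            scanned += 1
            if image[y][x] > 200:  # toy brightness-threshold "detection"
                found = True
    return found, scanned

# A 208x160 frame (ERS-7 camera resolution) with one bright target.
frame = [[0] * 208 for _ in range(160)]
for y in range(80, 90):
    for x in range(100, 110):
        frame[y][x] = 255

found, scanned = detect_in_window(frame, center=(85, 105), half_size=12)
baseline = 160 * 208  # pixels a full-image scan would touch
print(found, scanned, baseline // scanned)  # -> True 576 57
```

With a 24×24 window the sketch touches roughly 1/57 of the frame's pixels, which is the same order of magnitude as the 50× speedup reported in the abstract; the real saving depends on how tightly the window can be predicted.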

Keywords

Target Object · Object Detection · Baseline Method · Baseline Approach · Selective Visual Attention

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Daniel Stronger (1)
  • Peter Stone (1)
  1. Department of Computer Sciences, The University of Texas at Austin