Visual Mobile Robots Perception for Motion Control

Chapter
Part of the Intelligent Systems Reference Library book series (ISRL, volume 29)

Abstract

Visual perception methods were originally developed mainly to describe and understand human perception. The results of this research are now widely applied to modeling the visual perception of robots. This chapter first presents a brief review of the basic visual perception methods suitable for intelligent mobile robot applications. The analysis of these methods focuses on mobile robot motion control, where visual perception is used to localize objects or the human body, covering Bayesian visual perception methods for localization, log-polar visual perception, mapping of the robot's area of observation using visual perception, and landmark-based detection and localization with visual perception. An algorithm for mobile robot visual perception is then proposed, based on the properties of the log-polar transformation, which represents some of the objects and scene fragments in the mobile robot's area of observation in a simpler form for image processing. The features and advantages of the proposed algorithm are demonstrated on a situation typical of mobile robot motion control: following a road or corridor bounded by outdoor road edges, painted lane-separation lines, or indoor lines on both sides of a room or corridor. The proposed algorithm is evaluated in suitable simulations and in experiments with real mobile robots such as the Pioneer 3-DX (MobileRobots Inc.), the WiFiBot, and the LEGO Mindstorms NXT. The results are summarized and presented as graphs, test images, and comparative tables in the conclusion.
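To make the log-polar idea concrete, below is a minimal sketch of the transformation the chapter builds on, not the authors' actual implementation. It assumes a grayscale image and a known log-polar center; the function name log_polar and its parameters are illustrative. When the center is placed at the vanishing point of a road or corridor, edges converging toward that point become rows of roughly constant angle, which is the simplification exploited for motion control.

```python
import numpy as np

def log_polar(image, center=None, n_rho=64, n_theta=128):
    """Resample a grayscale image onto a log-polar (theta, rho) grid.

    Rays through `center` (e.g. the vanishing point of a road or
    corridor) map to rows of constant theta, so converging lane or
    corridor edges appear as near-horizontal stripes that are easier
    to detect and track than lines in the original Cartesian image.
    """
    h, w = image.shape
    if center is None:
        center = (w / 2.0, h / 2.0)
    cx, cy = center
    # Largest radius that stays inside the image borders.
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    # Logarithmically spaced radii and uniformly spaced angles.
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    # Cartesian sample positions for every (theta, rho) cell.
    xs = cx + rho[None, :] * np.cos(theta[:, None])
    ys = cy + rho[None, :] * np.sin(theta[:, None])
    # Nearest-neighbour sampling keeps the sketch dependency-free.
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    return image[yi, xi]  # shape: (n_theta, n_rho)

frame = np.random.rand(240, 320)              # stand-in for a camera frame
lp = log_polar(frame, center=(160.0, 100.0))  # assumed vanishing point
print(lp.shape)                               # (128, 64)
```

For production use, OpenCV offers a comparable transform (cv2.warpPolar with the WARP_POLAR_LOG flag); the nearest-neighbour version above is kept dependency-free for clarity.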

Keywords

Visual perception · Intelligent robots · Visual mobile robots · Motion control · Visual tracking · Visual navigation

Copyright information

© Springer Berlin Heidelberg 2012

Authors and Affiliations

Department of Radio Communications and Video Technologies, Technical University of Sofia, Sofia, Bulgaria
