
Visual attention servo control for task-specific robotic applications

Published in International Journal of Control, Automation and Systems

Abstract

This paper proposes a visual attention servo control (VASC) method that uses a Gaussian mixture model (GMM) for task-specific applications of mobile robots. In particular, a low-dimensional bias feature template is obtained using the GMM to achieve an efficient attention process. An image-based visual servo (IBVS) controller is used to search for a desired object in a scene through an attention system, which forms a task-specific state representation of the environment. First, task definition and object representation in semantic memory (SM) are proposed, and the bias feature template is obtained by GMM-based reduction of features from high to low dimension. Second, intensity, color, size, and orientation features are extracted to build the feature set, and the mean-shift method is used to segment the visual scene into discrete proto-objects. Given a task-specific object, top-down bias attention is evaluated and combined with bottom-up saliency-based attention to generate the saliency map. Third, a visual attention servo controller is developed to integrate the IBVS controller and the attention system for robotic cognitive control; a rule-based arbitrator switches between the episodic memory (EM)-based controller and the IBVS controller depending on whether the robot obtains the desired attention point in the image. Finally, the proposed method is evaluated on task-specific object detection under different conditions and on visual attention servo tasks. The results validate the applicability and usefulness of the developed method for robotics.
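The attention-biasing and arbitration steps summarized above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the function names, the diagonal-covariance GMM, the per-channel weighting scheme, and the attention-point test are all illustrative assumptions made for this example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bias_template(object_features, n_components=3):
    # Fit a GMM to high-dimensional object feature samples and keep the
    # component means as a compact, low-dimensional bias template.
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(object_features)          # rows: feature samples, cols: feature channels
    return gmm.means_                 # shape (n_components, n_channels)

def biased_saliency(feature_maps, template):
    # Combine bottom-up feature maps (e.g., intensity, color, size, orientation)
    # into one saliency map, weighting each channel by the top-down bias template.
    saliency = np.zeros_like(feature_maps[0])
    for k, fmap in enumerate(feature_maps):
        weight = template[:, k].mean()      # top-down emphasis on channel k
        saliency += weight * fmap
    return saliency / saliency.max()

def arbitrate(attention_point_found):
    # Rule-based switch: use the IBVS controller when a valid attention point
    # exists, otherwise fall back to the episodic-memory (EM)-based controller.
    return "IBVS" if attention_point_found else "EM"

# Toy usage with stand-in data: 4 feature channels, 64x64 feature maps.
samples = np.random.rand(200, 4)
maps = [np.random.rand(64, 64) for _ in range(4)]
template = bias_template(samples)
saliency_map = biased_saliency(maps, template)
controller = arbitrate((saliency_map > 0.9).any())
```

In this sketch the GMM component means stand in for the low-dimensional bias feature template, and the arbitrator reduces to a single rule on the biased saliency map; the paper's actual rules, features, and controllers are more elaborate.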



Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Dong Liu.

Additional information

Recommended by Editorial Board member Dong-Joong Kang under the direction of Editor Zengqi Sun.

This work was supported by the National High Technology Research and Development Program of China (863 Program, 2013AA040303) and in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. The authors gratefully acknowledge financial support from the China Scholarship Council.

Dong Liu received his M.S. from Dalian University of Technology, China, in 2010. He is currently pursuing a Ph.D. in Mechatronic Engineering at Dalian University of Technology. Mr. Liu was a visiting scholar in the Industrial Automation Laboratory, Mechanical Engineering, at the University of British Columbia, Canada, from 2011 to 2012. His research interests include intelligent robotics, cognitive control, and computer vision.

Ming Cong received his Ph.D. from Shanghai Jiao Tong University, China, in 1995. He joined the faculty of the School of Mechanical Engineering at Dalian University of Technology, China, in 2003. Dr. Cong is an Outstanding Expert receiving a special government allowance approved by the State Council, was recognized as an advanced worker of the Intelligent Robot theme in the automation field of the National High Technology Research and Development Program (863 Program), and was a member of the industrial robot expert group of the fifth Intelligent Robot theme of the 863 Program. His research interests include robotics and automation, intelligent control, and biomimetic robots.

Yu Du received her M.S. from Dalian University of Technology, China, in 2007. She then joined SIASUN Robot and Automation Co. Ltd., China. Mrs. Du is currently pursuing a Ph.D. in Mechanical Engineering at the University of British Columbia, Canada. Her main research interests include robotics and automation, and intelligent control.

Yun-Fei Zhang received his M.S. from Shanghai Jiao Tong University, China, in 2010. He is currently pursuing a Ph.D. in Mechanical Engineering at the University of British Columbia, Canada. His main research interests include robot learning, navigation, intelligent control, and sensor fusion.

Clarence W. de Silva received Ph.D. degrees from the Massachusetts Institute of Technology, U.S.A., in 1978, and the University of Cambridge, U.K., in 1998, and an honorary D.Eng. from the University of Waterloo, Canada, in 2008. In 1988, he joined the faculty of the Department of Mechanical Engineering at the University of British Columbia, Canada, as the NSERC-BC Packers Chair Professor of Industrial Automation. Dr. de Silva currently holds the Senior Canada Research Chair Professorship in Mechatronics & Industrial Automation. His research interests include intelligent control, robotics and its applications, and process automation.


About this article

Cite this article

Liu, D., Cong, M., Du, Y. et al. Visual attention servo control for task-specific robotic applications. Int. J. Control Autom. Syst. 11, 1241–1252 (2013). https://doi.org/10.1007/s12555-012-9505-6

