Robots’ Vision Humanization Through Machine-Learning Based Artificial Visual Attention

  • Kurosh Madani
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1055)


If the main challenge of robotics during the industrial era of the 19th century was to automate repetitive tasks, and that of the 20th century was to sophisticate these machines through digitization, the challenge of robotics in the current century will be to make humans and robots cohabit in the same living space. However, robots will not succeed in seamlessly integrating into the humans’ universe without developing the ability to perceive, in a human-like way, the environment they are supposed to share with them. In such a context, matching the skills of natural vision is an appealing perspective for autonomous robotics dealing with and prospecting human-robot interaction. The main goal of the present article is to discuss the plausibility and the reality of humanizing robots’ behavior, focusing on the perception of the surrounding environment. An implementation of the developed concept on a real humanoid robot supports the presented results and the related discussion.


Keywords: Autonomous robot · Artificial visual attention · Salient objects’ extraction · Real-time implementation
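The salient-object extraction named in the keywords builds on bottom-up visual attention, classically computed as center–surround contrast over several spatial scales. The following is a minimal sketch of that generic idea only, not the paper’s actual model: it uses a single intensity channel, box-filter surrounds, and illustrative function names (`saliency_map`, `box_blur`) chosen here for exposition.

```python
import numpy as np

def box_blur(img, radius):
    """Mean filter over a (2*radius+1)^2 window, via an integral image."""
    padded = np.pad(img, radius, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for window sums
    h, w = img.shape
    k = 2 * radius + 1
    win = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
           - ii[k:k + h, :w] + ii[:h, :w])
    return win / (k * k)

def saliency_map(image, scales=(2, 4, 8)):
    """Crude bottom-up saliency: absolute contrast between each pixel and
    its local surround mean, accumulated over several scales and
    normalized to [0, 1]."""
    gray = np.asarray(image, dtype=float)
    if gray.ndim == 3:                         # collapse RGB to intensity
        gray = gray.mean(axis=2)
    sal = np.zeros_like(gray)
    for s in scales:
        sal += np.abs(gray - box_blur(gray, s))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

On a uniform image the map is zero everywhere; a small bright blob on a dark background produces its strongest responses in and around the blob, which is the behavior a salient-region detector exploits before any top-down, task-driven filtering.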



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Université Paris-Est, Signals, Images, and Intelligent Systems Laboratory (LISSI/EA 3956), University Paris Est Creteil (UPEC), Senart-FB Institute of Technology, Lieusaint, France
