Attention Improves the Recognition Reliability of Backpropagation Network

  • Zbigniew Mikrut
  • Agata Piaskowska
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4029)


This paper presents a method for improving the recognition reliability of backpropagation-type networks, based on an attention-shifting technique. The mechanism is activated when the reliability of the network's answer is low. The signals reaching the hidden layer are used to select the image areas that are the most "doubtful" for the network during recognition. Three methods are proposed for extending the input vector after the area of focused attention has been shifted. The methods have been tested on the problem of handwritten digit recognition, and a noticeable improvement in recognition reliability has been obtained.
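The mechanism described in the abstract can be sketched roughly as follows. Everything concrete here is an illustrative assumption, not the authors' implementation: the random (untrained) weights, the 16×16 input size, the margin-based reliability score, the saliency heuristic for locating the doubtful region, and the nearest-neighbour zoom (the paper proposes three input-extension variants and mentions bicubic interpolation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 16x16 input image -> 64 hidden units -> 10 digit classes.
# Weights are random for illustration; in the paper they come from training.
W1 = rng.normal(scale=0.1, size=(64, 256))
W2 = rng.normal(scale=0.1, size=(10, 64))

def forward(x):
    h = np.tanh(W1 @ x)            # signals reaching/leaving the hidden layer
    o = W2 @ h
    e = np.exp(o - o.max())
    return h, e / e.sum()          # hidden activations, class probabilities

def reliability(p):
    # Assumed reliability score: margin between best answer and runner-up.
    s = np.sort(p)
    return s[-1] - s[-2]

def doubtful_region(h, patch=4):
    # Heuristic saliency: score each input pixel by how strongly it feeds
    # the active hidden units, then pick the highest-scoring patch.
    saliency = (np.abs(h) @ np.abs(W1)).reshape(16, 16)
    best, best_score = (0, 0), -np.inf
    for r in range(16 - patch + 1):
        for c in range(16 - patch + 1):
            s = saliency[r:r + patch, c:c + patch].sum()
            if s > best_score:
                best_score, best = s, (r, c)
    return best

def classify_with_attention(x, threshold=0.2):
    h, p = forward(x)
    if reliability(p) >= threshold:
        return int(p.argmax()), False       # answer reliable, no attention shift
    # Answer unreliable: shift attention to the most doubtful 4x4 area.
    r, c = doubtful_region(h)
    patch = x.reshape(16, 16)[r:r + 4, c:c + 4]
    # One possible input-extension scheme: re-present the doubtful patch
    # zoomed to the full field (nearest-neighbour here; the paper uses
    # bicubic interpolation) and average the two answers.
    zoom = np.kron(patch, np.ones((4, 4))).reshape(-1)
    _, p2 = forward(zoom)
    return int(((p + p2) / 2).argmax()), True
```

The averaging of the original and attention-shifted answers is only one of several plausible ways to combine the two passes; the paper evaluates three distinct schemes for feeding the shifted area back to the network.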


Keywords: hidden layer · recognition rate · bicubic interpolation · handwritten digit recognition · recognition reliability





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Zbigniew Mikrut (1)
  • Agata Piaskowska (1)
  1. Institute of Automatics, AGH University of Science and Technology, Kraków, Poland
