Learning in Computer Vision: Some Thoughts

  • Maria Petrou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4756)

Abstract

It is argued that the ability to generalise is the most important characteristic of learning and that generalisation may be achieved only if pattern recognition systems learn the rules of meta-knowledge rather than the labels of objects. A structure, called “tower of knowledge”, according to which knowledge may be organised, is proposed. A scheme of interpreting scenes using the tower of knowledge and aspects of utility theory is also proposed. Finally, it is argued that globally consistent solutions of labellings are neither possible, nor desirable for an artificial cognitive system.
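The utility-based labelling idea summarised above can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual formulation: the level names ("appearance", "function", "context") stand in for levels of the tower of knowledge, and labels are scored independently, reflecting the claim that no globally consistent labelling is enforced.

```python
# Toy sketch (illustrative only): score each candidate object label by a
# utility-weighted combination of evidence from several hypothetical
# knowledge levels, and pick the best label per object independently,
# without imposing global consistency across the scene.

def utility_label(evidence, utilities):
    """Return the label maximising the utility-weighted sum of per-level evidence."""
    scores = {}
    for label, per_level in evidence.items():
        scores[label] = sum(utilities[level] * p for level, p in per_level.items())
    return max(scores, key=scores.get), scores

# Hypothetical per-label evidence (probabilities) from three knowledge levels.
evidence = {
    "chair": {"appearance": 0.6, "function": 0.9, "context": 0.7},
    "table": {"appearance": 0.5, "function": 0.3, "context": 0.6},
}
# Hypothetical utility weights: how much each level matters to the system.
utilities = {"appearance": 1.0, "function": 2.0, "context": 0.5}

best, scores = utility_label(evidence, utilities)
print(best)  # the label with the highest utility-weighted score
```

Weighting "function" most heavily mirrors the paper's emphasis on meta-knowledge (what an object is for) over raw appearance labels.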

Keywords

Computer Vision, Utility Theory, Primary Visual Cortex, Causal Dependence, Markov Random Field

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Maria Petrou
  1. Communications and Signal Processing Group, Electrical and Electronic Engineering Department, Imperial College, London SW7 2AZ, UK