Equivalent TLU- and ΣΠ-Networks for Invariant Pattern Recognition

  • Helmut Glünder

Abstract

Two universal types of networks for the invariant recognition of pictorial patterns are compared with respect to function, structure, and cost. The main stage of both networks extracts features that are invariant under certain types of unrestricted geometric transformations, e.g. rigid translations. Both approaches are conceived for unequivocal class definitions and hence permit perfect pattern reconstruction. Although the networks are structurally different, they are functionally equivalent to a high degree. The cost, i.e. the number of weights per class that must be adjusted to obtain ideal and invariant classification, turns out to be almost the same for both approaches as well as for the reference network (a list classifier). In practice, however, the ΣΠ-network is superior to the TLU-network: it is more robust, and even single invariant features are unequivocally defined. The investigations reported here do not concern any aspects of learning.
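To make the contrast concrete, the following minimal sketch (not from the paper; the function names, the offsets d1 and d2, and the TLU weights and threshold are all illustrative choices) compares a ΣΠ-style feature, computed as a third-order autocorrelation term of a one-dimensional binary pattern, with a plain threshold logic unit (TLU). The product-sum feature is unchanged under cyclic translations of the pattern, while the output of a fixed-weight TLU generally is not.

```python
import numpy as np

def sigma_pi_feature(pattern, d1, d2):
    """Hypothetical ΣΠ-style invariant: sum over all cyclic translations t
    of the trilinear product p[t] * p[t+d1] * p[t+d2]. Reindexing the sum
    shows it is unchanged when the whole pattern is cyclically shifted."""
    n = len(pattern)
    return sum(int(pattern[t]) * int(pattern[(t + d1) % n]) * int(pattern[(t + d2) % n])
               for t in range(n))

def tlu(inputs, weights, threshold):
    """Threshold logic unit: fires iff the weighted input sum reaches the
    threshold; with fixed weights, the output depends on pattern position."""
    return int(np.dot(inputs, weights) >= threshold)

pattern = np.array([1, 0, 1, 1, 0, 0, 0, 1])
shifted = np.roll(pattern, 3)  # a rigid (cyclic) translation

# The ΣΠ feature is identical for the pattern and its shifted copy ...
print(sigma_pi_feature(pattern, 1, 3), sigma_pi_feature(shifted, 1, 3))  # 1 1

# ... whereas a single TLU with fixed (illustrative) weights is not invariant.
weights = np.array([2.0, -1.0, 0.5, 1.0, 0.0, 0.0, -0.5, 1.0])
print(tlu(pattern, weights, 2.0), tlu(shifted, weights, 2.0))            # 1 0
```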

Keywords

Invariant Feature · Mask Type · Trilinear Term · List Classifier · Invariant Classification

Copyright information

© Springer Science+Business Media Dordrecht 1990

Authors and Affiliations

  • Helmut Glünder
  1. Institut für Medizinische Psychologie, Ludwig-Maximilians-Universität, München 2, Germany
