Equivalent TLU- and ΣΠ-Networks for Invariant Pattern Recognition
Two universal types of networks for the invariant recognition of pictorial patterns are compared with respect to function, structure, and cost. The main stage of both networks extracts features that are invariant under certain types of unrestricted geometric transformations, e.g. rigid translations. Both approaches are conceived for unequivocal class definitions and hence permit perfect pattern reconstructions. Although the networks are structurally different, they are to a high degree functionally equivalent. The costs, i.e. the number of weights per class that must be adjusted in order to obtain ideal and invariant classification, turn out to be almost the same for both approaches as well as for the reference network (a list classifier). In practice, however, the ΣΠ-network is superior to the TLU-network: it is more robust, and even single invariant features are unequivocally defined. The investigations reported here do not concern any aspects of learning.
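To make the notion of a translation-invariant ΣΠ feature concrete, the following is a minimal sketch (not from the paper) of a single sigma-pi unit: it sums, over all image positions, the product of pixel pairs separated by a fixed displacement, i.e. one term of a generalized autocorrelation. Assuming cyclic (wrap-around) translations, the resulting feature value is unchanged when the pattern is shifted; the function name and test pattern are illustrative choices, not taken from the source.

```python
import numpy as np

def sigma_pi_feature(img, displacement):
    """One sigma-pi unit: sum over all positions of the product of pixel
    pairs separated by a fixed displacement (a second-order
    autocorrelation term). With cyclic wrap-around, this value is
    invariant under cyclic translations of the pattern."""
    dy, dx = displacement
    # Pair each pixel with the pixel `displacement` away (cyclic shift).
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return float(np.sum(img * shifted))

# A small binary test pattern with one pixel pair at horizontal distance 2.
img = np.zeros((8, 8))
img[2, 2] = img[2, 4] = img[5, 3] = 1.0

f = sigma_pi_feature(img, (0, 2))

# A translated copy of the same pattern yields the same feature value.
moved = np.roll(np.roll(img, 3, axis=0), 1, axis=1)
f_moved = sigma_pi_feature(moved, (0, 2))
```

A full invariant description would evaluate such product terms for many displacements (and, for trilinear terms, pixel triples), but each unit follows this same sum-of-products scheme.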
Keywords: Invariant Feature, Mask Type, Trilinear Term, List Classifier, Invariant Classification