Transformation Invariance in Pattern Recognition — Tangent Distance and Tangent Propagation

  • Patrice Y. Simard
  • Yann A. LeCun
  • John S. Denker
  • Bernard Victorri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1524)


In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting performance. If data and computational resources are unlimited, even trivial algorithms converge to the optimal solution. In practice, however, given limited data and other resources, satisfactory performance requires sophisticated methods that regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, “tangent distance” and “tangent propagation”, which make use of these invariances to improve performance.
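As an illustration of the two ideas named in the abstract (a sketch, not the authors' exact formulation), the one-sided tangent distance and a tangent-propagation penalty can be written in a few lines of NumPy. The tangent vector for horizontal translation is approximated by a finite difference between an image and a one-pixel-shifted copy; the distance from a point y to the tangent line {x + aT} then has a closed-form solution. The function names, the wrap-around shift, and the finite-difference penalty are illustrative assumptions.

```python
import numpy as np

def tangent_vector(image):
    # Illustrative tangent vector for horizontal translation,
    # approximated by a finite difference between the image and a
    # copy shifted by one pixel (wrap-around kept for simplicity).
    image = image.astype(float)
    return np.roll(image, 1, axis=1) - image

def one_sided_tangent_distance(x, y, T):
    # Distance from y to the tangent line {x + a*T}: the optimal
    # coefficient a projects (y - x) onto T, in closed form.
    x, y, T = x.ravel().astype(float), y.ravel().astype(float), T.ravel().astype(float)
    a = T @ (y - x) / (T @ T)
    residual = x + a * T - y
    return float(np.sqrt(residual @ residual))

def tangent_prop_penalty(f, x, T, eps=1e-4):
    # Tangent-propagation-style regularizer: penalize the directional
    # derivative of the classifier output f along the tangent vector T,
    # approximated here by a finite difference rather than back-propagation.
    df = (f(x + eps * T) - f(x)) / eps
    return float(np.sum(df ** 2))
```

For an image y that is exactly a one-pixel shift of x, the tangent distance is zero even though the Euclidean distance is not, which is precisely the translation invariance the chapter aims to exploit.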


Keywords: Tangent Vector, Tangent Plane, Handwritten Digit, Distortion Model, Local Learning
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Patrice Y. Simard (1)
  • Yann A. LeCun (1)
  • John S. Denker (1)
  • Bernard Victorri (2)
  1. Image Processing Services Research Lab, AT&T Labs - Research, Red Bank, USA
  2. CNRS, ELSAP, ENS, Montrouge, France
