Regularization on Discrete Spaces

  • Dengyong Zhou
  • Bernhard Schölkopf
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3663)


We consider the classification problem on a finite set of objects. Some of them are labeled, and the task is to predict the labels of the remaining unlabeled ones. Such an estimation problem is generally referred to as transductive inference. It is well known that many meaningful inductive or supervised methods can be derived from a regularization framework that minimizes a loss function plus a regularization term. In the same spirit, we propose a general discrete regularization framework defined on finite object sets, which can be thought of as a discrete analogue of classical regularization theory. A family of transductive inference schemes is then systematically derived from the framework, including our earlier algorithm for transductive inference, with which we obtained encouraging results on many practical classification problems. The discrete regularization framework is built on the discrete analysis and geometry that we develop, in which a number of discrete differential operators of various orders are constructed; these can be thought of as discrete analogues of their counterparts in the continuous case.
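The "earlier algorithm for transductive inference" mentioned above is the local-and-global-consistency method of reference [11], which is one instance of such a loss-plus-regularizer objective on a graph and admits a closed-form minimizer. A minimal NumPy sketch of that scheme (the affinity matrix `W`, label matrix `y`, and parameter `alpha` are illustrative choices, not notation from this paper):

```python
import numpy as np

def transductive_labels(W, y, alpha=0.99):
    """Transductive inference by discrete regularization on a graph
    (the local-and-global-consistency scheme of Zhou et al., NIPS 2004).

    W     : (n, n) symmetric non-negative affinity matrix over all objects
    y     : (n, k) label matrix; row i is one-hot for a labeled object,
            all zeros for an unlabeled one
    alpha : trade-off between the smoothness (regularization) term
            and the fitting (loss) term, 0 < alpha < 1
    """
    # Symmetrically normalize W: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Closed-form minimizer of the regularized objective:
    # F* = (I - alpha * S)^{-1} y
    F = np.linalg.solve(np.eye(len(y)) - alpha * S, y)
    # Predicted class of each object is the largest entry in its row.
    return F.argmax(axis=1)
```

For example, on a graph with two chain components where only one node per component is labeled, the labels diffuse along the edges so that each component is assigned its seed's class.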






References

  1. Chung, F.: Spectral Graph Theory. CBMS-NSF Regional Conference Series in Mathematics, vol. 92. SIAM, Philadelphia (1997)
  2. Eells, J., Sampson, J.H.: Harmonic mappings of Riemannian manifolds. American Journal of Mathematics 86, 109–160 (1964)
  3. Hardt, R., Lin, F.H.: Mappings minimizing the Lp norm of the gradient. Communications on Pure and Applied Mathematics 40, 556–588 (1987)
  4. Heinonen, J., Kilpeläinen, T., Martio, O.: Nonlinear Potential Theory of Degenerate Elliptic Equations. Oxford University Press, Oxford (1993)
  5. Jensen, R.: Uniqueness of Lipschitz extensions: minimizing the sup-norm of the gradient. Arch. Rat. Mech. Anal. 123(1), 51–74 (1993)
  6. Jost, J.: Riemannian Geometry and Geometric Analysis, 3rd edn. Springer, Heidelberg (2002)
  7. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-posed Problems. W.H. Winston, Washington (1977)
  8. Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998)
  9. Wahba, G.: Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 59. SIAM, Philadelphia (1990)
  10. Yamasaki, M.: Ideal boundary limit of discrete Dirichlet functions. Hiroshima Math. J. 16(2), 353–360 (1986)
  11. Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Schölkopf, B.: Learning with local and global consistency. In: Advances in Neural Information Processing Systems, vol. 16. MIT Press, Cambridge (2004)
  12. Zhou, D., Schölkopf, B., Hofmann, T.: Semi-supervised learning on directed graphs. In: Advances in Neural Information Processing Systems, vol. 17. MIT Press, Cambridge (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Dengyong Zhou (1)
  • Bernhard Schölkopf (1)

  1. Max Planck Institute for Biological Cybernetics, Tuebingen, Germany