Rank, Trace-Norm and Max-Norm

  • Nathan Srebro
  • Adi Shraibman
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3559)

Abstract

We study the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices of low complexity. We present generalization error bounds, based on these measures, for predicting unobserved entries. We also study the relations between these measures, exhibiting gaps between them and bounding the extent of those gaps.
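Two of the three measures discussed in the abstract can be read directly off a matrix's singular values: the rank is the number of nonzero singular values, and the trace-norm (nuclear norm) is their sum. The following sketch, which is illustrative and not taken from the paper, computes both with NumPy; the max-norm is omitted, as computing it requires solving a semidefinite program.

```python
import numpy as np

def rank_and_trace_norm(X, tol=1e-10):
    """Return (rank, trace_norm) of X, both computed from its singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    # Rank: count of singular values above a numerical tolerance.
    # Trace-norm (nuclear norm): sum of all singular values.
    return int(np.sum(s > tol)), float(np.sum(s))

# Example: a rank-1 matrix built as an outer product u v^T.
u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])
X = np.outer(u, v)

r, tn = rank_and_trace_norm(X)
# For a rank-1 outer product, the single singular value is ||u|| * ||v||,
# so the trace-norm equals ||u|| * ||v||.
assert r == 1
assert np.isclose(tn, np.linalg.norm(u) * np.linalg.norm(v))
```

Because the trace-norm is a convex function of the matrix (unlike the rank), it is the measure typically used as a tractable surrogate when fitting low-complexity matrices.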

Keywords

Generalization Error · Random Projection · Spectral Norm · Sign Matrix · Hadamard Matrix
Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Nathan Srebro, Department of Computer Science, University of Toronto, Toronto, Canada
  • Adi Shraibman, Institute of Computer Science, Hebrew University, Jerusalem, Israel
