
Online Tracking of Linear Subspaces

  • Koby Crammer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4005)

Abstract

We address the problem of de-noising a stream of input points online. We assume that the clean data lies in a linear subspace. We present two online algorithms that track this subspace and, as a consequence, de-noise the stream. We also describe two regularization schemes that improve resistance to noise. We analyze the algorithms in the loss-bound model and establish some of their properties. Preliminary simulations illustrate the usefulness of our algorithms.
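The paper's two algorithms are not reproduced in this preview. As a rough illustration of the setting it describes — de-noising a stream by projecting each noisy point onto a tracked linear subspace — the following sketch uses a generic Oja-style gradient update with QR re-orthonormalization; all dimensions, step sizes, and the noise level are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, eta = 10, 2, 0.1                 # ambient dim, subspace dim, step size (illustrative)

# Unknown "true" subspace generating the clean data.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
# Current subspace estimate, kept orthonormal throughout.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

for t in range(500):
    clean = B @ rng.standard_normal(k)         # clean point on the true subspace
    x = clean + 0.05 * rng.standard_normal(d)  # noisy observation from the stream
    x_hat = U @ (U.T @ x)                      # de-noised output: project onto estimate
    # Oja-style gradient step on the residual, then re-orthonormalize.
    U += eta * np.outer(x - x_hat, U.T @ x)
    U, _ = np.linalg.qr(U)
```

After processing the stream, the overlap between the estimate `U` and the generating basis `B` (the smallest singular value of `U.T @ B`, i.e. the cosine of the largest principal angle) should be close to one.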

Keywords

Linear Subspace · Online Algorithm · Input Point · Clean Data · Tradeoff Parameter



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Koby Crammer
    Department of Computer and Information Science, University of Pennsylvania
