A New Perspective on an Old Perceptron Algorithm

  • Shai Shalev-Shwartz
  • Yoram Singer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3559)

Abstract

We present a generalization of the Perceptron algorithm. The new algorithm performs a Perceptron-style update whenever the margin of an example is smaller than a predefined value. We derive worst case mistake bounds for our algorithm. As a byproduct we obtain a new mistake bound for the Perceptron algorithm in the inseparable case. We describe a multiclass extension of the algorithm. This extension is used in an experimental evaluation in which we compare the proposed algorithm to the Perceptron algorithm.
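The update rule sketched in the abstract — run the standard Perceptron update whenever an example's margin falls below a fixed threshold — can be illustrated as follows. This is a minimal sketch based only on the abstract's description; the function name, the threshold parameter `gamma`, and the single fixed-rate update are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def margin_perceptron(examples, gamma=1.0, epochs=1):
    """Illustrative margin-triggered Perceptron (names and parameters
    are assumptions, not the paper's exact algorithm).

    examples: list of (x, y) pairs with x a NumPy vector and y in {-1, +1}.
    gamma:    margin threshold; an update fires when y * <w, x> <= gamma.
    """
    dim = len(examples[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y in examples:
            # Perceptron-style update whenever the (signed) margin of the
            # example is smaller than the predefined value gamma.
            if y * np.dot(w, x) <= gamma:
                w = w + y * x
    return w
```

Setting `gamma = 0` recovers the classic Perceptron, which updates only on outright mistakes; a positive `gamma` also updates on correctly classified examples whose margin is too small.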



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Shai Shalev-Shwartz¹ ²
  • Yoram Singer¹ ²
  1. School of Computer Science & Engineering, The Hebrew University, Jerusalem, Israel
  2. Google Inc., Mountain View, USA
