Constant Rate Approximate Maximum Margin Algorithms
We present a new class of Perceptron-like algorithms with margin in which the "effective" learning rate η_eff, defined as the ratio of the learning rate to the length of the weight vector, remains constant. We prove that for sufficiently small η_eff the new algorithms converge in a finite number of steps, and we show that there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. A soft margin extension for Perceptron-like large margin classifiers is also discussed.
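The update scheme described above can be sketched in code. The following is an illustrative implementation, not the authors' exact algorithm: it assumes a Perceptron-with-margin update in which, at every step, the learning rate is set to η_t = η_eff · ‖w_t‖ so that the effective rate stays constant, and an update is triggered whenever a pattern's normalized margin falls below a threshold β. The function name, the margin condition, and the initialization from the first training pattern are all assumptions made for the sketch.

```python
import numpy as np

def constant_rate_margin_perceptron(X, y, eta_eff=0.01, beta=0.1, max_epochs=1000):
    """Illustrative Perceptron-like margin algorithm with constant
    effective learning rate eta_eff = eta_t / ||w_t||.

    This is a sketch of the idea in the abstract, not the authors'
    exact update rule. X: (n, d) patterns, y: labels in {-1, +1}.
    """
    # Initialize w with the first pattern so that ||w|| > 0; otherwise
    # eta_t = eta_eff * ||w|| would remain zero and no update could occur.
    w = eta_eff * y[0] * X[0].astype(float)
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            norm_w = np.linalg.norm(w)
            # Update whenever the (unnormalized) margin of the pattern
            # falls below beta times ||w||, i.e. normalized margin < beta.
            if yi * np.dot(w, xi) <= beta * norm_w:
                eta_t = eta_eff * norm_w  # constant effective rate
                w = w + eta_t * yi * xi
                mistakes += 1
        if mistakes == 0:  # all patterns have normalized margin > beta
            break
    return w
```

On linearly separable data with β below the achievable margin, the loop terminates once every pattern clears the margin threshold, at which point the direction w/‖w‖ approximates a large-margin separator.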
Keywords: Weight Vector, Learning Rate, Training Pattern, Maximum Margin, Extended Space