Advanced Lectures on Machine Learning

Volume 2600 of the series Lecture Notes in Computer Science pp 235-257


Online Learning of Linear Classifiers

  • Jyrki Kivinen, Research School of Information Sciences and Engineering, Australian National University



This paper surveys some basic techniques and recent results in online learning, with a focus on linear classification. The most familiar algorithm for this task is the perceptron. We explain the perceptron algorithm and its convergence proof as an instance of a generic method based on Bregman divergences. This leads to a more general algorithm known as the p-norm perceptron. We prove a generalization of the perceptron convergence theorem that covers the p-norm perceptron and the non-separable case. We also show how regularization, again based on Bregman divergences, can make an online algorithm more robust against target movement.
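To make the setting concrete, the following is a minimal sketch of the classical online perceptron for linear classification (not the p-norm generalization from the paper): labels are in {-1, +1}, the prediction is the sign of the inner product w·x, and each mistake triggers the additive update w ← w + y·x. The function name and the toy dataset are illustrative assumptions, not taken from the paper.

```python
# Sketch of the online perceptron. Labels y are in {-1, +1};
# a mistake (non-positive margin) triggers the update w += y * x.

def perceptron_online(examples, dim, passes=10):
    """Run the perceptron over a stream of (x, y) pairs.

    Returns the final weight vector and the total mistake count.
    """
    w = [0.0] * dim
    mistakes = 0
    for _ in range(passes):
        clean = True
        for x, y in examples:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin <= 0:  # mistake: apply the additive update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
                clean = False
        if clean:  # a full pass with no mistakes: data is separated
            break
    return w, mistakes

# Hypothetical linearly separable toy data: label is the sign of x1 - x2.
data = [((1.0, 0.0), 1), ((0.0, 1.0), -1), ((2.0, 1.0), 1), ((1.0, 3.0), -1)]
w, m = perceptron_online(data, dim=2)
```

The perceptron convergence theorem bounds the number of mistakes on separable data by (R/γ)², where R bounds the example norms and γ is the separation margin; the paper's Bregman-divergence view recovers this proof and extends it to p-norm updates.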