Abstract
In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the on-line model of learning. We use a standard variation of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions retain many of the advantages that the traditional Winnow algorithm has on fixed concepts. These benefits include a weak dependence on the number of irrelevant attributes, inexpensive runtime, and robust behavior in the presence of noise. In fact, we show that the bound for the tracking version of Winnow has even better performance with respect to irrelevant attributes. Let X ∈ [0,1]^n be an instance of the learning problem. In the traditional algorithm, the bound depends on ln n; the shifting-concept bound given here depends approximately on max ln(‖X‖₁).
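To make the setting concrete, here is a minimal sketch of a Winnow-style learner with the standard tracking modification: multiplicative promotion/demotion on mistakes, plus a lower bound (floor) on the weights so that an attribute which becomes relevant again after a concept shift can be recovered quickly. The function names, the promotion factor `alpha`, and the floor `epsilon` are illustrative assumptions, not the paper's exact variant.

```python
def winnow_predict(w, x, theta):
    # Predict 1 when the weighted sum of the attributes reaches the threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def winnow_update(w, x, y, y_hat, alpha=2.0, epsilon=0.1):
    # Multiplicative update on a mistake.  The weight floor epsilon is the
    # standard tracking modification: no weight ever decays to zero, so the
    # learner can follow a shifting target concept.  (Illustrative sketch;
    # the paper's exact variant may differ.)
    if y_hat == y:
        return list(w)  # no mistake, no update
    sign = 1 if y == 1 else -1  # promote on a false negative, demote on a false positive
    return [max(wi * alpha ** (sign * xi), epsilon) if xi > 0 else wi
            for wi, xi in zip(w, x)]
```

With instances X ∈ [0,1]^n, the exponent `sign * xi` scales each multiplicative step by the attribute's magnitude, matching the real-valued setting of the abstract.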
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Mesterharm, C. (2002). Tracking Linear-Threshold Concepts with Winnow. In: Kivinen, J., Sloan, R.H. (eds) Computational Learning Theory. COLT 2002. Lecture Notes in Computer Science(), vol 2375. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45435-7_10
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43836-6
Online ISBN: 978-3-540-45435-9