
Tracking Linear-Threshold Concepts with Winnow

  • Conference paper
  • First Online:
Computational Learning Theory (COLT 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2375)


Abstract

In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the on-line model of learning. We use a standard variation of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions have many of the same advantages that the traditional Winnow algorithm has on fixed concepts. These benefits include a weak dependence on the number of irrelevant attributes, inexpensive runtime, and robust behavior against noise. In fact, we show that the bound for the tracking version of Winnow has even better performance with respect to irrelevant attributes. Let X ∈ [0,1]^n be an instance of the learning problem. In the traditional algorithm, the bound depends on ln n. In this paper, the shifting concept bound depends approximately on max ln(‖X‖₁).
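For intuition, Winnow keeps one positive weight per attribute, predicts with a linear threshold over those weights, and updates the weights multiplicatively only when it makes a mistake. Since ‖X‖₁ ≤ n for X ∈ [0,1]^n, a bound that grows with max ln(‖X‖₁) can never be worse than one that grows with ln n, and it is substantially better when instances are sparse. The sketch below (in Python) illustrates this style of mistake-driven multiplicative update; it is not the algorithm analyzed in the paper. In particular, the optional weight floor w_min is an assumed device, borrowed from tracking variants of multiplicative-update algorithms, that keeps demoted weights from vanishing so the learner can re-adapt after the target concept shifts, and the names TrackingWinnow, alpha, theta, and w_min are illustrative only.

```python
import numpy as np

class TrackingWinnow:
    """Minimal sketch of a Winnow-style learner for instances x in [0,1]^n.

    Not the paper's exact algorithm: the optional weight floor `w_min` is an
    assumption inspired by tracking variants of multiplicative-update methods.
    """

    def __init__(self, n, alpha=2.0, theta=None, w_min=None):
        self.w = np.ones(n)                                # one positive weight per attribute
        self.theta = float(n) if theta is None else theta  # prediction threshold
        self.alpha = alpha                                 # multiplicative factor, alpha > 1
        self.w_min = w_min                                 # optional floor on the weights

    def predict(self, x):
        # Linear-threshold prediction: output 1 iff the weighted sum reaches theta.
        x = np.asarray(x, dtype=float)
        return int(np.dot(self.w, x) >= self.theta)

    def observe(self, x, y):
        # Mistake-driven update: weights change only when the prediction is wrong.
        x = np.asarray(x, dtype=float)
        y_hat = self.predict(x)
        if y_hat != y:
            if y == 1:
                self.w *= self.alpha ** x       # promote attributes active in a missed positive
            else:
                self.w *= self.alpha ** (-x)    # demote attributes active in a false positive
            if self.w_min is not None:
                # Keep every weight above the floor so that, after the target
                # concept shifts, no attribute is too suppressed to recover.
                np.maximum(self.w, self.w_min, out=self.w)
        return y_hat
```

The floor bounds how far any weight can be demoted: after a concept shift, an attribute needs at most about log_α(θ/w_min) promotions to become influential again, at the price of a small amount of residual weight on irrelevant attributes. The particular choice of w_min is a tuning assumption, not a value taken from the paper.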




Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mesterharm, C. (2002). Tracking Linear-Threshold Concepts with Winnow. In: Kivinen, J., Sloan, R.H. (eds) Computational Learning Theory. COLT 2002. Lecture Notes in Computer Science, vol 2375. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45435-7_10


  • DOI: https://doi.org/10.1007/3-540-45435-7_10

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43836-6

  • Online ISBN: 978-3-540-45435-9

  • eBook Packages: Springer Book Archive
