Machine Learning, Volume 26, Issue 1, pp 5–23

Empirical Support for Winnow and Weighted-Majority Algorithms: Results on a Calendar Scheduling Domain

  • Avrim Blum


This paper describes experimental results on using Winnow and Weighted-Majority based algorithms on a real-world calendar scheduling domain. These two algorithms have been extensively studied in the theoretical machine learning literature. We show here that these algorithms can be quite competitive in practice, outperforming the decision-tree approach currently in use in the Calendar Apprentice system in terms of both accuracy and speed. One contribution of this paper is a new variant on the Winnow algorithm (used in the experiments) that is especially suited to conditions with string-valued classifications, and we give a theoretical analysis of its performance. In addition, we show how Winnow can be applied to achieve a good accuracy/coverage tradeoff, and we explore issues that arise, such as concept drift. We also provide an analysis of a policy for discarding predictors in Weighted-Majority that allows it to speed up as it learns.
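For readers unfamiliar with the two algorithms the abstract names, the following is a minimal sketch of the textbook versions: Winnow's multiplicative promotion/demotion rule (Littlestone, 1988) and the Weighted-Majority master-expert update (Littlestone & Warmuth, 1994). This is not the paper's string-valued Winnow variant or its predictor-discarding policy; all function names and the parameter choices (alpha=2.0, beta=0.5) are illustrative assumptions.

```python
def winnow_predict(w, x, theta):
    """Predict 1 iff the weighted sum over active boolean attributes reaches theta."""
    return 1 if sum(wi for wi, xi in zip(w, x) if xi) >= theta else 0

def winnow_update(w, x, y, y_hat, alpha=2.0):
    """On a mistake, multiply (promote) or divide (demote) the weights of active attributes."""
    if y_hat == y:
        return list(w)
    factor = alpha if y == 1 else 1.0 / alpha
    return [wi * factor if xi else wi for wi, xi in zip(w, x)]

def weighted_majority(weights, predictions):
    """Master prediction: the label backed by the larger total expert weight."""
    vote = sum(w if p == 1 else -w for w, p in zip(weights, predictions))
    return 1 if vote >= 0 else 0

def wm_update(weights, predictions, y, beta=0.5):
    """Multiply by beta the weight of every expert that predicted incorrectly."""
    return [w * beta if p != y else w for w, p in zip(weights, predictions)]
```

Both updates are purely multiplicative, which is what yields the mistake bounds that scale only logarithmically with the number of irrelevant attributes (for Winnow) or experts (for Weighted-Majority).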

Keywords: Winnow, Weighted-Majority, Multiplicative algorithms



Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Avrim Blum, School of Computer Science, Carnegie Mellon University, Pittsburgh
