Abstract
This paper presents both a theoretical discussion and an experimental comparison of batch and incremental learning, with the aim of identifying the respective advantages and disadvantages of the two approaches when learning from frequently updated databases. The paper argues that incremental learning may be more suitable for this purpose, although a number of issues remain to be resolved.
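The batch/incremental contrast the abstract describes can be illustrated with a toy majority-class learner (a hypothetical sketch, not one of the algorithms the paper compares): the batch learner must reprocess the entire training set whenever the database changes, while the incremental learner folds each new instance into its running state and never revisits old data.

```python
from collections import Counter

class BatchMajority:
    """Batch learner: retrains from scratch on the full dataset each time."""
    def fit(self, labels):
        self.counts = Counter(labels)  # one pass over ALL training instances
        return self

    def predict(self):
        return self.counts.most_common(1)[0][0]

class IncrementalMajority:
    """Incremental learner: updates running counts one instance at a time."""
    def __init__(self):
        self.counts = Counter()

    def update(self, label):
        self.counts[label] += 1  # constant work per new database record
        return self

    def predict(self):
        return self.counts.most_common(1)[0][0]

labels = ["spam", "ham", "spam"]

batch = BatchMajority().fit(labels)   # sees the whole database at once
inc = IncrementalMajority()
for y in labels:                      # sees one instance at a time
    inc.update(y)

# Both reach the same model here; the cost profiles differ when the
# database grows: batch refitting is O(n) per update, incremental is O(1).
assert batch.predict() == inc.predict() == "spam"
```

For this trivially order-independent learner the two routes converge to the same model; the paper's point is that for realistic learners (e.g. decision-tree induction) the trade-off between recomputation cost and model quality is less clear-cut.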
Keywords
- Classification Accuracy
- Training Instance
- Concept Drift
- High Classification Accuracy
- Incremental Algorithm
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Carbonara, L., Borrowman, A. (1998). A comparison of batch and incremental supervised learning algorithms. In: Żytkow, J.M., Quafafou, M. (eds) Principles of Data Mining and Knowledge Discovery. PKDD 1998. Lecture Notes in Computer Science, vol 1510. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0094828
DOI: https://doi.org/10.1007/BFb0094828
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65068-3
Online ISBN: 978-3-540-49687-8
eBook Packages: Springer Book Archive