An Adaptive Prequential Learning Framework for Bayesian Network Classifiers
We introduce an adaptive prequential learning framework for Bayesian Network Classifiers that addresses the cost-performance trade-off and copes with concept drift. Our strategy for incorporating new data is based on bias management and gradual adaptation. Starting with the simple Naïve Bayes classifier, we scale up model complexity by gradually increasing the maximum number of allowable attribute dependencies, and then by searching for new dependencies in the extended search space. Since updating the structure is costly, we use new data primarily to adapt the parameters, and we adapt the structure only when this is really necessary. Concept drift is handled with the Shewhart P-Chart. We evaluate our adaptive algorithms on artificial domains and benchmark problems and show their advantages and applicability to real-world on-line learning systems.
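The P-Chart monitors the proportion of classification errors per batch against control limits derived from the binomial standard error. The following is a minimal sketch of that idea, assuming batch-level error rates as input; the function names, the warning/drift multipliers, and the baseline-reset policy are illustrative assumptions, not the authors' implementation.

```python
import math

def p_chart_limit(p_bar, n, k):
    """Upper control limit for an error proportion p_bar estimated
    from classification batches of size n (binomial std. error)."""
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return min(1.0, p_bar + k * sigma)

def monitor_drift(batch_error_rates, n, warn_k=2.0, drift_k=3.0):
    """Label each batch error rate against P-Chart limits built from
    the running mean of in-control batches; restart the baseline
    after a drift is signalled."""
    states, p_bar, count = [], batch_error_rates[0], 1
    for p in batch_error_rates:
        if p > p_chart_limit(p_bar, n, drift_k):
            states.append("drift")
            p_bar, count = p, 1            # restart the baseline
        elif p > p_chart_limit(p_bar, n, warn_k):
            states.append("warning")       # keep baseline unchanged
        else:
            states.append("in-control")
            count += 1
            p_bar += (p - p_bar) / count   # update running mean
    return states
```

For example, with batches of 100 instances and error rates `[0.10, 0.11, 0.09, 0.30]`, the first three batches stay in control while the jump to 0.30 exceeds the 3-sigma limit and is flagged as drift, which in the framework would trigger structural adaptation rather than a parameter-only update.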