Fast and Effective Single Pass Bayesian Learning
The rapid growth in data makes ever more urgent the quest for highly scalable learning algorithms that can maximize the benefit derived from the information implicit in big data. Where data are too big to reside in core, efficient learning requires minimal data access. Single-pass learning accesses each data point only once, providing the most efficient data access possible without resorting to sampling. The AnDE family of classifiers comprises effective single-pass learners. We investigate two extensions to A2DE: subsumption resolution and MI-weighting. Neither technique requires additional data access. Both reduce A2DE's learning bias, improving its effectiveness for big data. Furthermore, we demonstrate that the two techniques are complementary. The resulting combined technique delivers computationally efficient, low-bias learning well suited to big data.
Keywords: Averaged n-Dependence Estimators · Subsumption Resolution · Big Data · Naive Bayes · Bias-Variance Trade-off
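To make the single-pass property concrete, the following is a minimal, illustrative sketch of the kind of frequency counting an AnDE-style learner performs: each training instance is visited exactly once while the joint counts needed for classification are accumulated. The function name, data layout, and the restriction to pairwise (AODE-like) and three-way (A2DE-like) counts are assumptions for illustration, not the authors' implementation.

```python
from collections import defaultdict

def single_pass_counts(stream, n_attrs):
    """Accumulate the joint frequency tables an AnDE-style learner needs,
    touching each training instance exactly once (illustrative sketch)."""
    class_counts = defaultdict(int)   # N(y)
    pair_counts = defaultdict(int)    # N(y, attribute i = v)
    triple_counts = defaultdict(int)  # N(y, attribute i = v, attribute j = w)
    for x, y in stream:               # the single pass over the data
        class_counts[y] += 1
        for i in range(n_attrs):
            pair_counts[(y, i, x[i])] += 1
            for j in range(i + 1, n_attrs):
                triple_counts[(y, i, x[i], j, x[j])] += 1
    return class_counts, pair_counts, triple_counts

# tiny usage example: three instances, two binary attributes
data = [((0, 1), "a"), ((1, 1), "a"), ((0, 0), "b")]
cc, pc, tc = single_pass_counts(data, 2)
```

Because the counts are additive, this pass streams over data that never needs to fit in memory, and both subsumption resolution and MI-weighting can be applied afterwards without revisiting the data.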