Techniques for Efficient Learning without Search

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7301)


Averaged n-Dependence Estimators (AnDE) is a family of learning algorithms that ranges from low variance coupled with high bias through to high variance coupled with low bias. The asymptotic error of the lowest-bias variant is Bayes optimal. The AnDE family of algorithms has a training time that is linear in the number of training examples, learns in a single pass through the data, supports incremental learning, handles missing values directly, and is robust in the face of noise. These characteristics make the algorithms particularly well suited to learning from large data. However, for higher orders of n they are very computationally demanding. This paper presents data structures and algorithms developed to reduce both memory and time for training and classification. These enhancements have enabled the evaluation and comparison of A3DE’s effectiveness. The results provide further support for the hypothesis that as the number of training examples increases, decreasing error will be attained by members of the AnDE family with increasing levels of n.
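To illustrate the properties described above (single-pass, count-based training; incremental updates; classification by averaging over dependence estimators), the following is a minimal sketch of the n = 1 member of the family, AODE. It is a hypothetical simplified implementation for intuition only: class and variable names are invented, Laplace smoothing is used throughout, and the missing-value handling and memory optimizations that are the subject of the paper are omitted.

```python
from collections import defaultdict

class AODE:
    """Sketch of Averaged One-Dependence Estimators (AnDE with n = 1).

    Training is a single pass that only increments frequency counts,
    so it is linear in the number of examples and trivially incremental.
    Classification averages P(y, x) over every one-dependence estimator,
    each conditioning all attributes on one "super-parent" attribute.
    """

    def __init__(self, n_attrs):
        self.n_attrs = n_attrs
        self.N = 0                                  # examples seen
        self.class_counts = defaultdict(int)        # c(y)
        self.pair_counts = defaultdict(int)         # c(y, x_i)
        self.triple_counts = defaultdict(int)       # c(y, x_i, x_j)
        self.attr_values = [set() for _ in range(n_attrs)]

    def update(self, x, y):
        """Incremental single-pass training: just increment counts."""
        self.N += 1
        self.class_counts[y] += 1
        for i, xi in enumerate(x):
            self.attr_values[i].add(xi)
            self.pair_counts[(y, i, xi)] += 1
            for j, xj in enumerate(x):
                if i != j:
                    self.triple_counts[(y, i, xi, j, xj)] += 1

    def predict(self, x):
        """Average the joint estimate over all super-parent choices i:
        P(y, x) ~ mean_i  P(y, x_i) * prod_{j != i} P(x_j | y, x_i)."""
        best_y, best_p = None, -1.0
        for y in self.class_counts:
            total = 0.0
            for i, xi in enumerate(x):
                # Laplace-smoothed estimate of P(y, x_i)
                p = (self.pair_counts[(y, i, xi)] + 1.0) / (
                    self.N + len(self.attr_values[i]) * len(self.class_counts))
                for j, xj in enumerate(x):
                    if j == i:
                        continue
                    # Laplace-smoothed estimate of P(x_j | y, x_i)
                    num = self.triple_counts[(y, i, xi, j, xj)] + 1.0
                    den = self.pair_counts[(y, i, xi)] + len(self.attr_values[j])
                    p *= num / den
                total += p
            if total > best_p:
                best_y, best_p = y, total
        return best_y
```

Higher-order members (A2DE, A3DE, ...) replace the single super-parent with pairs or triples of attributes, which is exactly where the combinatorial growth in count tables arises and why the memory- and time-reduction techniques of this paper become necessary.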


Keywords: naive Bayes · semi-naive Bayes · probabilistic prediction





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

Faculty of Information Technology, Monash University, Australia
