Lazy Averaged One-Dependence Estimators

  • Liangxiao Jiang
  • Harry Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4013)


Naive Bayes is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is often violated. In response, researchers have made substantial efforts to improve the accuracy of naive Bayes by weakening the conditional independence assumption. The most recent such work is Averaged One-Dependence Estimators (AODE) [15], which demonstrates good classification performance. In this paper, we propose a novel lazy learning algorithm, Lazy Averaged One-Dependence Estimators (LAODE), which extends AODE. For a given test instance, LAODE first expands the training data by adding copies (clones) of each training instance in proportion to its similarity to the test instance, and then uses the expanded training data to build an AODE classifier to classify the test instance. We experimentally evaluate our algorithm in the Weka system [16], using all 36 UCI data sets [11] recommended by Weka [17], and compare it to naive Bayes [3], AODE [15], and LBR [19]. The experimental results show that LAODE significantly outperforms all the other algorithms in the comparison.
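The abstract's two-step procedure (clone by similarity, then run AODE on the expanded data) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the similarity measure (count of matching attribute values), the cloning rule (one clone per match), and the Laplace smoothing constants are all illustrative choices.

```python
def expand(train, test_x):
    """Clone each training instance once per attribute value it shares
    with the test instance (an assumed similarity-based cloning rule)."""
    expanded = []
    for x, y in train:
        sim = sum(a == b for a, b in zip(x, test_x))
        expanded.extend([(x, y)] * (1 + sim))  # original plus `sim` clones
    return expanded

def aode_predict(train, x, classes, n_vals):
    """Simplified AODE: sum over one-dependence estimators, each using a
    different attribute as the super-parent, then pick the best class."""
    n, d = len(train), len(x)
    scores = {}
    for c in classes:
        total = 0.0
        for i in range(d):  # attribute i acts as the super-parent
            # P(c, x_i), Laplace-smoothed
            n_ci = sum(1 for xt, yt in train if yt == c and xt[i] == x[i])
            p = (n_ci + 1) / (n + len(classes) * n_vals[i])
            for j in range(d):
                if j != i:
                    # P(x_j | c, x_i), Laplace-smoothed
                    n_cij = sum(1 for xt, yt in train
                                if yt == c and xt[i] == x[i] and xt[j] == x[j])
                    p *= (n_cij + 1) / (n_ci + n_vals[j])
            total += p
        scores[c] = total
    return max(scores, key=scores.get)

def laode_predict(train, x, classes, n_vals):
    """LAODE as described in the abstract: expand, then classify with AODE."""
    return aode_predict(expand(train, x), x, classes, n_vals)
```

For example, with a toy set of four instances over two binary attributes, a test instance identical to one training instance receives two clones of that instance (weight 3) while entirely dissimilar instances receive none, biasing the AODE estimates toward the test instance's neighborhood.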


Keywords: Bayesian Network · Test Instance · Training Instance · Attribute Node · Conditional Independence Assumption




  1. Aha, D.W.: Lazy Learning. Kluwer Academic, Dordrecht (1997)
  2. Chickering, D.M.: Learning Bayesian networks is NP-Complete. In: Fisher, D., Lenz, H. (eds.) Learning from Data: Artificial Intelligence and Statistics V, pp. 121–130. Springer, Heidelberg (1996)
  3. Duda, R.O., Hart, P.E.: Pattern Classification and Scene Analysis. John Wiley, New York (1973)
  4. Frank, E., Hall, M., Pfahringer, B.: Locally Weighted Naive Bayes. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence, pp. 249–256. Morgan Kaufmann, San Francisco (2003)
  5. Friedman, J., Kohavi, R., Yun, Y.: Lazy decision trees. In: Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 717–724. AAAI Press, Menlo Park (1996)
  6. Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian Network Classifiers. Machine Learning 29, 131–163 (1997)
  7. Keogh, E.J., Pazzani, M.J.: Learning augmented Naive Bayes classifiers. In: Proceedings of the Seventh International Workshop on AI and Statistics (1999)
  8. Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-1996), pp. 202–207. AAAI Press, Menlo Park (1996)
  9. Langley, P., Sage, S.: Induction of selective Bayesian classifiers. In: Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 399–406 (1994)
  10. Jiang, L., Zhang, H., Su, J.: Instance Cloning Local Naive Bayes. In: Zhang, S., Jarvis, R. (eds.) AI 2005. LNCS (LNAI), vol. 3809, pp. 280–291. Springer, Heidelberg (2005)
  11. Merz, C., Murphy, P., Aha, D.: UCI repository of machine learning databases. Dept. of ICS, University of California, Irvine (1997)
  12. Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Francisco (1988)
  13. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)
  14. Sahami, M.: Learning Limited Dependence Bayesian Classifiers. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 335–338. AAAI Press, Menlo Park (1996)
  15. Webb, G.I., Boughton, J., Wang, Z.: Not so naive Bayes: Aggregating one-dependence estimators. Machine Learning 58, 5–24 (2005)
  16. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)
  17.
  18. Xie, Z., Hsu, W., Liu, Z., Lee, M.: SNNB: A Selective Neighborhood Based Naive Bayes for Lazy Learning. In: Proceedings of the Sixth Pacific-Asia Conference on KDD, pp. 104–114. Springer, Heidelberg (2002)
  19. Zheng, Z., Webb, G.I.: Lazy Learning of Bayesian Rules. Machine Learning 41, 53–84 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Liangxiao Jiang (1)
  • Harry Zhang (2)
  1. Faculty of Computer Science, China University of Geosciences, Wuhan, P.R. China
  2. Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
