Candidate Elimination Criteria for Lazy Bayesian Rules

  • Geoffrey I. Webb
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2256)

Abstract

Lazy Bayesian Rules modifies naive Bayesian classification to undo elements of the harmful attribute independence assumption. It has been shown to provide classification error comparable to that of boosted decision trees. This paper explores alternatives to the candidate elimination criterion employed within Lazy Bayesian Rules. Improvements over naive Bayes are consistent so long as the candidate elimination criterion ensures there is sufficient data for accurate probability estimation. However, the original candidate elimination criterion is demonstrated to provide better overall error reduction than the use of a minimum data subset size criterion.
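To make the candidate elimination step concrete, the following Python sketch illustrates the general Lazy Bayesian Rules idea under simplifying assumptions; it is not the paper's algorithm. For a single test instance it greedily grows a rule antecedent of attribute-value conditions matching the instance, applying naive Bayes over the remaining attributes to the training subset that satisfies the antecedent. The minimum-subset-size elimination criterion (`MIN_SUBSET`) is the alternative criterion the abstract names, with a hypothetical threshold, and resubstitution error stands in for the more careful local error estimation the real method would use. All identifiers are illustrative.

```python
import math

MIN_SUBSET = 30  # hypothetical minimum-data-subset-size elimination threshold


def nb_predict(rows, labels, attrs, inst):
    """Laplace-smoothed naive Bayes restricted to attribute indices `attrs`."""
    classes = sorted(set(labels))
    n = len(rows)
    best, best_score = None, float("-inf")
    for c in classes:
        in_c = [r for r, y in zip(rows, labels) if y == c]
        score = math.log((len(in_c) + 1) / (n + len(classes)))  # class prior
        for a in attrs:
            vals = len({r[a] for r in rows})
            match = sum(1 for r in in_c if r[a] == inst[a])
            score += math.log((match + 1) / (len(in_c) + vals))
        if score > best_score:
            best, best_score = c, score
    return best


def nb_error(rows, labels, attrs):
    """Resubstitution error of the local naive Bayes (a stand-in for a
    leave-one-out estimate)."""
    wrong = sum(nb_predict(rows, labels, attrs, r) != y
                for r, y in zip(rows, labels))
    return wrong / len(rows)


def lbr_predict(rows, labels, inst):
    """Lazily grow a rule antecedent for one test instance, then apply
    naive Bayes over the remaining attributes on the matching subset."""
    attrs = set(range(len(inst)))
    cur_rows, cur_labels = rows, labels
    cur_err = nb_error(cur_rows, cur_labels, attrs)
    while True:
        best_a, best_err, best_split = None, cur_err, None
        for a in sorted(attrs):
            # Candidate subset: training cases matching the instance on a.
            split = [(r, y) for r, y in zip(cur_rows, cur_labels)
                     if r[a] == inst[a]]
            # Candidate elimination: discard conditions whose subset is
            # too small for reliable probability estimation.
            if len(split) < MIN_SUBSET:
                continue
            s_rows = [r for r, _ in split]
            s_labels = [y for _, y in split]
            err = nb_error(s_rows, s_labels, attrs - {a})
            if err < best_err:
                best_a, best_err, best_split = a, err, (s_rows, s_labels)
        if best_a is None:
            break  # no surviving candidate improves the local error
        attrs.remove(best_a)  # move the condition into the antecedent
        cur_rows, cur_labels = best_split
        cur_err = best_err
    return nb_predict(cur_rows, cur_labels, attrs, inst)


if __name__ == "__main__":
    # Toy demo: y = x0 XOR x1 violates the attribute independence
    # assumption, so plain naive Bayes fails, but conditioning on x0
    # makes the remaining problem trivial for the local naive Bayes.
    data = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)] * 10
    labels = [a ^ b for a, b, c in data]
    print(lbr_predict(data, labels, (1, 0, 1)))  # expect 1
```

In this sketch, raising `MIN_SUBSET` makes the local probability estimates more reliable but eliminates more candidate conditions, which mirrors the trade-off the paper examines between the two criteria.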

Keywords

machine learning 



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Geoffrey I. Webb
    1. School of Computing and Mathematics, Deakin University, Geelong
