A Bayesian multinet classifier allows a different set of independence assertions among the variables in each of the local Bayesian networks composing the multinet. The structure of each local network is usually learned using a joint-probability-based score that is not tailored to classification, i.e., classifiers based on high-scoring structures are not necessarily accurate. Moreover, this score is even less discriminative for learning multinet classifiers because it is generally computed using only the patterns of the corresponding class, ignoring the patterns of the other classes. We propose the Bayesian class-matched multinet (BCM2) classifier to tackle both issues. BCM2 learns each local network using a detection-rejection measure, i.e., the accuracy in simultaneously detecting patterns of the class while rejecting patterns of the other classes. The classifier demonstrates superior accuracy to other state-of-the-art Bayesian network and multinet classifiers on 32 real-world databases.
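To make the multinet idea concrete, the following is a minimal sketch of multinet-style classification: one local probabilistic model is fitted per class, and a new pattern is assigned to the class maximizing the class prior times the local model's joint probability. This is purely illustrative and is not the paper's BCM2 algorithm: the class name `SimpleMultinet` is hypothetical, and each "local network" is replaced here by a simple independence (naive) model over discrete features rather than a structure learned with the detection-rejection score.

```python
import numpy as np

# Hypothetical sketch of a multinet-style classifier. Each class gets its
# own local model (here: independent per-feature categorical distributions,
# a deliberate simplification of a learned local Bayesian network).
class SimpleMultinet:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing pseudo-count

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)
        self.n_values_ = X.max(axis=0) + 1      # cardinality of each feature
        self.priors_ = {}                        # P(c)
        self.cpts_ = {}                          # per-class value probabilities
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            self.cpts_[c] = [
                (np.bincount(Xc[:, j], minlength=self.n_values_[j]) + self.alpha)
                / (len(Xc) + self.alpha * self.n_values_[j])
                for j in range(X.shape[1])
            ]
        return self

    def log_joint(self, x, c):
        # log P(c) + sum_j log P(x_j | local model for class c)
        return np.log(self.priors_[c]) + sum(
            np.log(self.cpts_[c][j][x[j]]) for j in range(len(x))
        )

    def predict(self, X):
        # Assign each pattern to the class with the highest log joint score
        return np.array([
            max(self.classes_, key=lambda c: self.log_joint(x, c))
            for x in np.asarray(X)
        ])


# Tiny usage example with two binary features
model = SimpleMultinet().fit([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 1, 1])
print(model.predict([[0, 0], [1, 1]]))
```

In the paper's setting, the independence model above would be replaced by a local network whose structure is chosen to maximize the detection-rejection measure, i.e., detection of the class's own patterns together with rejection of patterns from other classes.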


Keywords: Classification accuracy, Bayesian network, local network, joint probability distribution, class pattern



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yaniv Gurwicz¹
  • Boaz Lerner¹
  1. Pattern Analysis and Machine Learning Lab, Department of Electrical & Computer Engineering, Ben-Gurion University, Beer-Sheva, Israel
