Model switching for Bayesian classification trees with soft splits

  • Jörg Kindermann
  • Gerhard Paass
Communications Session 6. Tree Construction
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1510)

Abstract

Due to the high number of insolvencies in the credit business, automatic procedures for testing the creditworthiness of enterprises are becoming increasingly important. For this task we use classification trees with soft splits, which assign observations near a split boundary to both branches. Tree models involve an extra complication: the number of parameters varies as the tree grows and shrinks. We therefore adapt the reversible jump Markov chain Monte Carlo procedure to this model, producing an ensemble of trees that represents the posterior distribution. For a real-world credit-scoring application our algorithm yields lower classification errors than bootstrapped versions of regression trees (CART), neural networks, and adaptive splines (MARS). The predictive distribution allows one to assess the certainty of credit decisions for new cases and guides the collection of additional information.
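
To make the two central ideas concrete, the sketch below shows how a soft split can route an observation down both branches of a node via a logistic gate, and how an ensemble of trees (standing in for MCMC samples from the posterior) can be averaged to obtain a predictive probability together with a crude certainty measure. This is a minimal illustration under assumptions, not the authors' implementation: the class names, the logistic gate, and the width parameter are hypothetical choices for the example.

    import math
    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Leaf:
        p: float                 # class-1 probability stored at this leaf

    @dataclass
    class Node:
        feature: int             # index of the split variable
        threshold: float         # split point
        width: float             # gate softness; width -> 0 recovers a hard CART split
        left: "Tree"
        right: "Tree"

    Tree = Union[Leaf, Node]

    def predict(tree: Tree, x: List[float]) -> float:
        """Soft-split prediction: x is weighted into BOTH branches."""
        if isinstance(tree, Leaf):
            return tree.p
        # Logistic gate: near 0 or 1 far from the boundary, about 0.5 close to it,
        # so observations near the split contribute to both subtrees.
        g = 1.0 / (1.0 + math.exp(-(x[tree.feature] - tree.threshold) / tree.width))
        return (1.0 - g) * predict(tree.left, x) + g * predict(tree.right, x)

    def posterior_predict(trees: List[Tree], x: List[float]):
        """Average over an ensemble of trees (e.g. posterior samples).
        The spread across trees serves as a simple certainty measure."""
        ps = [predict(t, x) for t in trees]
        mean = sum(ps) / len(ps)
        std = (sum((p - mean) ** 2 for p in ps) / len(ps)) ** 0.5
        return mean, std

    # Two toy trees standing in for posterior samples (hypothetical values)
    t1 = Node(0, 0.5, 0.1, Leaf(0.1), Leaf(0.8))
    t2 = Node(0, 0.6, 0.2, Leaf(0.2), Leaf(0.9))

    mean, std = posterior_predict([t1, t2], x=[0.55])
    print(f"P(default) ~ {mean:.2f} +/- {std:.2f}")

A large spread across sampled trees would signal an uncertain credit decision, which is where collecting additional information about the applicant pays off.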

References

  1. J.O. Berger. Statistical Decision Theory: Foundations, Concepts and Methods. Springer, New York, 1980.
  2. L. Breiman, J.H. Friedman, R. Olshen, and C.J. Stone. Classification and Regression Trees. Wadsworth Int. Group, Belmont, CA, 1984.
  3. W. Buntine. Learning classification trees. Statistics and Computing, 2:63–73, 1992.
  4. C. Carter and J. Catlett. Assessing credit card applications using machine learning. IEEE Expert, 2(3):71–79, 1987.
  5. H. Chipman, E. George, and R. McCulloch. Bayesian CART. Technical report, Dept. of Statistics, Univ. of Texas, Austin, 1995.
  6. J.H. Friedman. Multivariate adaptive regression splines. Annals of Statistics, 19(1):1–67, 1991.
  7. J.H. Friedman. Local learning based on recursive covering. Technical report, Stanford University, August 1996.
  8. S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1–58, 1992.
  9. P.J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Technical report, University of Bristol, 1995.
  10. R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.
  11. G. Paaß. Assessing and improving neural network predictions by the bootstrap algorithm. In S. Hanson, J. Cowan, and C. Giles, editors, NIPS-5, pages 196–203. Morgan Kaufmann, San Mateo, CA, 1993.
  12. J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, 1993.
  13. B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, 1996.
  14. S.R. Waterhouse. Classification and Regression using Mixtures of Experts. PhD thesis, Cambridge University Engineering Dept., October 1997.
  15. L. Tierney. Markov chains for exploring posterior distributions. Technical Report 560, School of Statistics, University of Minnesota, 1994.
  16. J. Wallrafen. Kreditwürdigkeitsprüfung von Unternehmen mit neuronalen Klassifikationsverfahren (Creditworthiness assessment of enterprises with neural classification methods). Master's thesis, University of Erlangen-Nürnberg, 1995.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Jörg Kindermann (1)
  • Gerhard Paass (1)
  1. RWCP Theoretical Foundation Lab, GMD - German National Research Center for Information Technology, Sankt Augustin, Germany