
Multiple Classifier Systems Based on Interpretable Linear Classifiers

  • David J. Hand
  • Niall M. Adams
  • Mark G. Kelly
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)

Abstract

Multiple classifier systems fall into two types: classifier combination systems and classifier choice systems. The former aggregate the outputs of the component classifiers to produce an overall classification, while the latter select which single component classification rule to use. We illustrate each type in a real application where practical constraints limit the kind of base classifier that can be used. In particular, our context – that of credit scoring – favours simple, interpretable, and especially linear, forms. In this context, simple measures of classification performance are only one way of assessing the suitability of a classification rule.
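As an illustrative sketch only (not taken from the paper), the distinction between the two types can be expressed in a few lines of Python. The weights, validation data, and thresholds below are invented for illustration: combination averages the decision scores of two linear classifiers, while choice selects the single component that performs best on a validation set.

```python
# Hypothetical sketch: two linear classifiers f_k(x) = w_k . x + b_k
# on a toy two-feature task. All numbers are invented for illustration.

def linear_score(w, b, x):
    """Decision score of a linear classifier: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Two hand-picked component classifiers (weights, bias).
clf_a = ([1.0, 0.5], -1.0)
clf_b = ([0.2, 1.5], -1.2)

def combine(classifiers, x):
    """Classifier *combination*: average the component scores,
    then threshold the aggregate at zero."""
    avg = sum(linear_score(w, b, x) for w, b in classifiers) / len(classifiers)
    return 1 if avg >= 0 else 0

def choose(classifiers, val_set, x):
    """Classifier *choice*: pick the single component with the best
    accuracy on a validation set, then apply only that rule."""
    def accuracy(clf):
        w, b = clf
        return sum((linear_score(w, b, xv) >= 0) == (yv == 1)
                   for xv, yv in val_set) / len(val_set)
    w, b = max(classifiers, key=accuracy)
    return 1 if linear_score(w, b, x) >= 0 else 0

val = [([0.0, 1.0], 1), ([2.0, 0.0], 1), ([0.0, 0.0], 0), ([0.5, 0.2], 0)]
x_new = [1.0, 0.4]
print(combine([clf_a, clf_b], x_new))      # aggregate of both rules
print(choose([clf_a, clf_b], val, x_new))  # single selected rule
```

Note that the two strategies can disagree on the same point: here the averaged score falls below the threshold while the selected component classifies the point positively, which is why the choice between the two system types matters in practice.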

Keywords

logistic regression, perceptron, support vector machines, product models



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • David J. Hand¹
  • Niall M. Adams¹
  • Mark G. Kelly¹
  1. Department of Mathematics, Imperial College, London, UK
