Tuning Cost-Sensitive Boosting and Its Application to Melanoma Diagnosis

  • Stefano Merler
  • Cesare Furlanello
  • Barbara Larcher
  • Andrea Sboner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)

Abstract

This paper investigates a methodology for effective model selection of cost-sensitive boosting algorithms. In many real-world situations, e.g. automated medical diagnosis, it is crucial to tune the classification performance towards the sensitivity and specificity required by the user. To this purpose, for binary classification problems, we have designed a cost-sensitive variant of AdaBoost in which (1) the model error function is weighted with separate costs for the errors in the two classes (false negatives and false positives), and (2) the weights are updated differently for negatives and positives at each boosting step. Finally, (3) a practical search procedure allows the classifier to meet, or come as close as possible to, the required sensitivity and specificity without an extensive tabulation of the ROC curve. This off-the-shelf methodology was applied to the automatic diagnosis of melanoma on a set of 152 skin lesions described by geometric and colorimetric features, outperforming, on the same data set, skilled dermatologists and a specialized automatic system based on a multiple classifier combination.
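To make the class-dependent weighting concrete, the sketch below shows one way such a cost-sensitive AdaBoost loop could look in Python for labels in {-1, +1}. It is an illustrative approximation of the scheme described above, not the authors' exact algorithm: the names cost_sensitive_adaboost, c_fn, and c_fp are assumptions, and decision stumps stand in for whatever base learner is actually used.

```python
# Minimal sketch of a cost-sensitive AdaBoost variant (illustrative only,
# not the authors' exact procedure). Missed positives (false negatives)
# and missed negatives (false positives) are re-weighted with separate
# costs c_fn and c_fp, so the two classes evolve differently per round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_adaboost(X, y, c_fn=2.0, c_fp=1.0, n_rounds=50):
    """y must contain labels in {-1, +1}; +1 = positive class (e.g. melanoma)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        miss = (pred != y).astype(float)
        err = np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)  # usual AdaBoost step size
        # Class-dependent update: misclassified examples grow by a factor
        # that depends on their true class, then weights are renormalized.
        cost = np.where(y == 1, c_fn, c_fp)
        w = w * np.exp(alpha * cost * miss)
        w = w / w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def boosted_predict(learners, alphas, X):
    """Weighted vote of the base learners; the sign gives the class."""
    score = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.where(score >= 0, 1, -1)
```

Under this reading, the search procedure of point (3) would amount to adjusting the cost ratio c_fn/c_fp on a validation set (for instance by a simple bracketing search) until the estimated sensitivity and specificity meet, or come closest to, the user's constraints, instead of tabulating the whole ROC curve.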

Keywords

AdaBoost Algorithm, Tuning Procedure, Melanoma Diagnosis, Separate Cost, Melanoma Data


Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Stefano Merler (1)
  • Cesare Furlanello (1)
  • Barbara Larcher (1)
  • Andrea Sboner (1)

  1. ITC-irst, Trento, Italy
